Learning Concerns
Learning Objectives
- You are aware of some of the concerns related to learning while using large language models.
When one uses large language models to complete tasks, one learns to use large language models, but not necessarily to complete the tasks oneself. This should not be surprising: there are similarities between using large language models and plagiarism, as both can involve copying and possibly altering content without learning to produce the content oneself.
There have been long-standing arguments about what sorts of tools should be available for learning. As a classic example, there are still arguments for and against the use of calculators in math education. While using a calculator can lead to a correct answer, the effort and practice of doing the mental work is omitted. However, if the learner is already proficient in math, a calculator can be a time-saving tool.
The same can be said for using large language models; if the learner is already proficient in the subject, using a large language model can save time.
There are considerable differences between individuals in the time it takes to solve programming problems. Similarly, there are differences in the time it takes to complete tasks with large language models, depending both on one's skill in using large language models and on one's task-related knowledge and skills.
However, if the learner is not proficient in the subject, using a large language model can lead to a completed task but little learning. In addition, if the learner does not know much about the subject, using a large language model can also lead to forming misconceptions, in part due to possible hallucinations.
A related phenomenon is automation bias, where humans are inclined to trust the outputs of an automated system even when the outputs are incorrect.
The interesting question is how to balance learning and performing. In academic settings and in education, the main objective is (almost) always learning, so there is not much to balance.
In professional life, on the other hand, the main objective is often to perform and to complete tasks; learning while doing so can be secondary. In such cases, it can be meaningful to use large language models even if one does not know much about the subject. However, if the tasks can be completed with a large language model without much knowledge, one should also consider what value one brings to the table, and whether not learning about the subject could lead to problems in the future.
Awareness of the learning-related risks of relying on large language models is also slowly increasing. For example, there is already some evidence of software developers stopping the use of large language models due to concerns that include not learning while using them.