Chatbots and other AI tools can be a great help during your studies - read more about how to use them best and how to apply critical thinking.
Using AI Tools - At Your Own Risk
Always use AI tools thoughtfully. You are responsible for any input and therefore also the output. Be aware that AI tools are not always GDPR-compliant. Therefore, inputs and prompts must never contain confidential and/or sensitive personal information.
We recommend verifying information and sources received from AI tools.
Be aware that AI tools can hallucinate in their output, which can also be influenced by biases or stereotypes in the training data.
Do not upload copyrighted material to AI tools unless VIA has a special agreement with the provider that allows such use – ask VIA Library if you are in doubt.
-
What is Generative AI?
What is artificial intelligence?
Artificial intelligence (AI) refers to computer technologies that simulate human intelligence.
You will often encounter AI in the form of chatbots and generative AI (GAI). Generative AI is characterized by its ability to create new content, including text, images, audio, and video.
How does generative AI work?
Generative AI works by using large language models (LLMs): complex algorithms designed to predict the next word in a sentence. These models are trained on extensive datasets from the internet, enabling them to recognize and mimic linguistic patterns. Through an artificial neural network, the language model analyzes these patterns and, through conversation, generates output that appears intelligent and meaningful. Text prompts can also be used to generate images, audio, video, and more.
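To make the idea of next-word prediction more concrete, here is a deliberately simplified sketch in Python. It is an illustration only: the probability table is made up and hand-written, whereas real models like ChatGPT and Copilot learn billions of such patterns with neural networks trained on huge datasets.

```python
import random

# Toy illustration of "predict the next word": a hand-written probability
# table instead of a trained neural network. The examples are invented.
next_word_probabilities = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "roof": 0.1},
    "i am studying to become a": {"nurse": 0.5, "teacher": 0.3, "engineer": 0.2},
}

def predict_next_word(context: str) -> str:
    """Pick a likely next word for the given context, weighted by probability."""
    options = next_word_probabilities[context]
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

print(predict_next_word("the cat sat on the"))  # most often prints "mat"
```

A real language model repeats this step word after word, which is why its answers read as fluent conversation even though it is only predicting likely continuations.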
Which language models can be used?
Since OpenAI introduced its large language model ChatGPT in November 2022, a multitude of other large language models have emerged. In addition to ChatGPT, Microsoft's Copilot is a popular chatbot that can be used for a variety of tasks. Popular text-to-image models include DALL-E 3 and Midjourney.
Note: Do not upload copyrighted material, personal data covered by the GDPR, or sensitive personal information to generative AI tools (language models).
-
What is Prompting?
What is prompting?
Prompting is what you do when you provide a text input to a large language model like ChatGPT or Copilot. With a well-formulated prompt, you can better utilize the language model to create the content you need. This process allows you to leverage generative AI’s ability to generate texts, ideas, or solutions by formulating prompts that are clear and targeted.
How do you prompt?
Consider language models like ChatGPT and Copilot as "blind experts" who have a broad knowledge base and therefore require clear context to deliver in-depth answers. Give them specific, detailed prompts to avoid superficial and generic responses, and instead get targeted and useful outputs. This involves clearly formulating your request, including key information, and specifying the purpose of your inquiry. It is important to continuously adjust and qualify the outputs provided by the language model through conversation. Remember, the more context and precise instructions you give the language model, the better the quality of its output will be.
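As a rough sketch of what "clear context, key information, and purpose" can look like in practice, the following Python snippet simply assembles an example prompt from those parts. The study programme, topic, and wording are illustrative assumptions, not an official VIA template.

```python
# Building a prompt from the elements described above: role, context,
# task, and desired output format. All details are illustrative examples.
role = "Act as an experienced lecturer in the nursing programme."
context = "I am a 2nd-semester nursing student preparing for an exam on hygiene."
task = "Explain the most important principles of hand hygiene."
output_format = "Answer in five short bullet points, in plain language, and end with one follow-up question I should reflect on."

prompt = "\n".join([role, context, task, output_format])
print(prompt)  # paste the result into ChatGPT or Copilot and refine it through conversation
```

The point is not the exact wording but the structure: who the model should act as, what it needs to know about you, what you want it to do, and how you want the answer delivered.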
Read more about different prompt strategies here.
-
Why is Source Criticism Important?
Being critical of sources when working with a language model places great demands on you as a user. You cannot be critical of sources in language models as you would be with other sources. Normally, source criticism involves looking at the relationship between sender, message, and receiver. But this is not possible when working in a language model, as its output is generated from a mixture of various data from the internet and therefore does not have a clearly defined sender.
Remember to validate
It is important not to work in a language model as you would in a search engine or database. Consider a language model like ChatGPT or Copilot as a conversation partner rather than a generator of correct answers. If you use a language model to obtain factual information, it is important to validate and compare this information with other, more reliable sources. A language model will sometimes confidently produce incorrect information.
When language models hallucinate
When a language model like ChatGPT or Copilot provides false information, it is said to be "hallucinating." As advanced probability machines, they sometimes guess incorrectly. When language models hallucinate, they create information that is not based on actual facts but on erroneous connections from their training data. This phenomenon occurs because language models generate responses by mimicking linguistic patterns rather than understanding actual truths.
What is bias?
It is important to understand that there is embedded bias in the training material that forms the foundation for the output of language models. Bias is data that misrepresents reality because it leans in a certain direction or favors certain ideas, often due to unfair assumptions or stereotypes. Since language models like ChatGPT and Copilot learn from a wide range of internet content, they can embed and perpetuate existing prejudices or stereotypes found in their training data. This can result in skewed responses that reflect those biases. Therefore, when working with language models, it is essential to be aware of this risk and actively seek to identify and correct for bias by supplementing with diverse and balanced sources.
Are you aware of how you feed language models?
In the free versions of ChatGPT and Copilot, your inputs are used to improve the models. If you log in to Copilot with your VIA user, your data is protected by commercial data protection and is not used for model training. You can read more here.
To summarize the above: to use AI critically, you must
- Understand how a large language model works (Read more in the section "What is generative AI?")
- Consider the language model as a conversation partner rather than a search engine
- Know that it sometimes hallucinates, so always remember to validate the factual information it provides
- Be aware of the underlying bias embedded in the language model's training material
AI as a Study Buddy
Due to their ability to deliver intelligent and meaningful conversations, language models like ChatGPT and Copilot are ideal sparring partners. This also applies in an academic context. Here are a handful of techniques and methods you can use during your studies.
Check out the University of Copenhagen's online course Generative AI as a Study Buddy.
-
Get Help with Writing Assignments
Use a language model to:
- Proofread your text and fix punctuation.
- Suggest sentence ideas or short texts you can build on.
- Translate from one language to another.
- Propose a structure or headings for sections in your assignment.
-
Have a Conversation to Deepen Your Understanding
- Definition questions: Ask the language model to explain a concept or theory in simple terms. For example: “Explain this concept to me like I’m 12 years old.”
- Comparison questions: Ask about similarities and differences between theories, concepts, or methods.
- Application-based questions: Ask for examples of how a theory can be applied in real-life situations.
- Critical questions: Challenge assumptions or conclusions in a text. For example: “What are some common criticisms of this theory?”
- Reflective questions: Ask how the information relates to your own experiences. For example: “How can I connect this to my previous experience (describe the experience) from practice?”
-
Get Feedback from a Language Model
- Paste in text from your assignment or project and ask for feedback on things like clarity, flow between sections, and academic tone.
- Use a language model in the early stages of your work to brainstorm and get creative input.
- Ask the model to act as an examiner and quiz you on a specific topic.
- Then ask for feedback on your answers and performance; a sketch of how such prompts could look follows below.
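A minimal sketch of how those two steps could be phrased. The subject and wording are made-up examples chosen for illustration, not VIA templates, so adapt them to your own course.

```python
# Illustrative prompts for the examiner exercise described above.
# The subject and phrasing are assumptions chosen for the example.
quiz_prompt = (
    "Act as an examiner in basic anatomy and physiology. "
    "Ask me five exam questions about the cardiovascular system, "
    "one at a time, and wait for my answer before asking the next."
)
feedback_prompt = (
    "Now give me feedback on my answers: what was correct, what was missing, "
    "and which topics I should revise before the exam."
)

print(quiz_prompt)
print(feedback_prompt)
```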
Always remember to critically evaluate the model’s output. You can read more in the section “Why is source criticism important?”