Chatbots and other AI tools can be a great help during your studies - read more about how to use them best and how to apply critical thinking.
Using AI Tools - At Your Own Risk
Always use AI tools thoughtfully. You are responsible for any input and therefore also the output. Be aware that AI tools are not always GDPR-compliant. Therefore, inputs and prompts must never contain confidential and/or sensitive personal information.
We recommend verifying information and sources received from AI tools.
Be aware that AI tools can hallucinate, and that their output can also be influenced by biases or stereotypes in the training data.
Do not upload copyrighted material to AI tools unless VIA has a special agreement with the provider that allows such use – ask VIA Library if you are in doubt.
-
What is Generative AI?
What is artificial intelligence?
Artificial intelligence or AI refers to computer technologies that simulate human intelligence.
AI is most often encountered in the form of chatbots and generative AI (GAI). Generative AI is characterized by its ability to create new content, including text, images, audio, and video.
How does generative AI work?
Generative AI works by using large language models (LLMs), which are complex algorithms designed to predict the next word in a sentence. These models are trained on extensive datasets from the internet, enabling them to recognize and mimic linguistic patterns. Through an artificial neural network, the language model analyzes these patterns and, through conversation, generates output that appears intelligent and meaningful. Text prompts can also be used to generate images, audio, video, and more.
Which language models can be used?
Since OpenAI introduced their large language model ChatGPT in November 2022, a multitude of other large language models have emerged. In addition to ChatGPT, Microsoft's Copilot is a popular chatbot that can be used for a variety of tasks. Popular text-to-image models include DALL-E 3 and Midjourney.
Note: Do not upload copyrighted material or confidential and/or sensitive personal information (cf. GDPR) to generative AI tools (language models).
-
What is Prompting?
What is prompting?
Prompting means writing a text message (a prompt) to a language model such as ChatGPT, Claude, or Copilot to get it to respond or solve a task. A good prompt helps the model understand what you need – such as a suggestion, an explanation, or a draft of a text. The better you formulate your prompt, the more relevant and useful the result will be.
How do you prompt?
Think of the language model as a helpful but context-blind expert: it knows a lot but doesn't automatically understand what you want. So be clear and precise. Explain what you need, why you need it, and what the answer should look like – for example, in terms of length, tone, or target audience. Use specific questions and provide examples if relevant.
Remember, prompting is a process. You can (and should) adjust your prompts along the way and ask the model to elaborate, improve, or change its response. The more relevant information and clear direction you provide, the better the output will be.
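A prompt that follows these principles could, for example, look like this: “I am a first-year nursing student writing an assignment on patient communication. Suggest three possible ways to structure my introduction. Keep each suggestion to 2-3 sentences and use a formal, academic tone.”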
-
Prompt Hacks
When prompting, you can use the following prompt hacks:
- "Do you understand the task?": End your prompt with this question. This way, you get the language model to repeat the described instructions, and you will discover if it has misunderstood the intention of your prompt.
- Format: Language models can deliver outputs in various formats. Therefore, consider how you want the output from your prompt. For example: In bullet points, as an email, table, or matrix.
- Tone: Do you want the language model to deliver in a formal or informal tone? Academic or humorous tone? Precise or poetic?
- Step-by-step: Asking the language model to deliver output in steps makes it better at following the progression of the conversation. Example: “Introduce yourself to the student and inquire about their (insert need/request). Wait for a response. Then ask which education they are pursuing and explain that the question helps tailor the academic level of your questions. Wait for a response and then inquire about…”
-
Tips and Tricks for Prompting Images
When working with generating images in text-to-image models, it is important to be precise in your description of the desired image. Therefore, consider the following elements – an example prompt that combines several of them follows the list:
- Subjects/objects: Person, animal, thing, fantasy creatures.
- Interaction: How do the different elements interact with each other?
- Medium: Photography, painting, cartoon, sculpture, etc.
- Style: “In the style of…” Pop art, Dali, Disney, Alfred Hitchcock films, etc.
- Environment: Indoors/outdoors, desert, jungle, underwater, Gotham City, etc.
- Mood: Creepy, calm, feverish dream, energetic, etc.
- Perspective: Frog/bird's-eye view, close-up, POV, etc.
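An image prompt that combines several of these elements could, for example, look like this: “A close-up photograph of an old fisherman in a small wooden boat at sunrise, calm and peaceful mood, seen from a frog's-eye perspective.”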
Copilot offers image generation with the same tool, DALL-E 3, that is found in ChatGPT 4. It is good practice to state that images are AI-generated when you use them in assignments, presentations, or similar.
-
Build an Advanced Prompt
Building a structured, advanced prompt is also known as prompt engineering. Prompt engineering is the art of fine-tuning and structuring requests to language models so that the generated responses are targeted to specific needs or goals.
Below is an example of the elements an advanced prompt can be built from:
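- Role: Tell the model who it should act as. For example: “Act as a supervisor on my study programme.”
- Task: Describe precisely what you want it to do. For example: “Give me feedback on the structure of my problem statement.”
- Context: Explain the background, such as your study programme, the topic of the assignment, and the target audience.
- Format: State how the answer should be delivered, such as bullet points, a table, or a short paragraph.
- Tone: State whether the answer should be formal, academic, informal, etc.
- Process: Ask the model to work step by step and to ask clarifying questions before it answers.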
-
Why is Source Criticism Important?
Being critical of sources when working with a language model places great demands on you as a user. You cannot be critical of sources in language models as you would be with other sources. Normally, source criticism involves looking at the relationship between sender, message, and receiver. But this is not possible when working in a language model, as its output is generated from a mixture of various data from the internet and therefore does not have a clearly defined sender.
Remember to validate
It is important not to work in a language model as you would in a search engine or database. Consider a language model like ChatGPT or Copilot as a conversation partner rather than a generator of correct answers. If you use a language model to obtain factual information, it is important to validate and compare this information with other, more reliable sources. It happens that a language model will confidently produce incorrect information.
When language models hallucinate
When a language model like ChatGPT or Copilot provides false information, it is said to be "hallucinating." As advanced probability machines, they sometimes guess incorrectly. When language models hallucinate, they create information that is not based on actual facts but on erroneous connections from their training data. This phenomenon occurs because language models generate responses by mimicking linguistic patterns rather than understanding actual truths.
What is bias?
It is important to understand that there is embedded bias in the training material that forms the foundation for the output of language models. Bias is data that misrepresents reality because it leans in a certain direction or favors certain ideas, often due to unfair assumptions or stereotypes. Since language models like ChatGPT and Copilot learn from a wide range of internet content, they can embed and perpetuate existing prejudices or stereotypes found in their training data. This can result in skewed responses that reflect those biases. Therefore, it is essential when working with language models to be aware of this risk and to actively seek to identify and correct for bias by supplementing with diverse and balanced sources.
Are you aware of how you feed language models?
In the free versions of ChatGPT and Copilot, your inputs are used to improve the models. If you log in to Copilot with your VIA user account, your data is protected by commercial data protection and is not used for model training.
To summarize the above: to use AI critically, you must
- Understand how a large language model works (Read more in the section "What is generative AI?")
- Consider the language model as a conversation partner rather than a search engine
- Know that it sometimes hallucinates, which means you must always validate the factual information it provides
- Be aware of the underlying bias embedded in the language model's training material
AI as a Study Buddy
Due to their ability to deliver intelligent and meaningful conversations, language models like ChatGPT and Copilot are ideal sparring partners. This also applies in an academic context. Here are a handful of techniques and methods you can use during your studies.
Check out the University of Copenhagen's online course Generative AI as a Study Buddy.
-
Get Help with Writing Assignments
Use a language model to:
- Proofread your text and fix punctuation.
- Suggest sentence ideas or short texts you can build on.
- Translate from one language to another.
- Propose a structure or headings for sections in your assignment (see the example prompt after the list).
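For the last point, a prompt could, for example, look like this: “Suggest a structure with headings for a 10-page assignment on (insert topic). The assignment must include theory, method, analysis, and discussion.”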
-
Have a Conversation to Deepen Your Understanding
- Definition questions: Ask the language model to explain a concept or theory in simple terms. For example: “Explain this concept to me like I’m 12 years old.”
- Comparison questions: Ask about similarities and differences between theories, concepts, or methods.
- Application-based questions: Ask for examples of how a theory can be applied in real-life situations.
- Critical questions: Challenge assumptions or conclusions in a text. For example: “What are some common criticisms of this theory?”
- Reflective questions: Ask how the information relates to your own experiences. For example: “How can I connect this to my previous experience (describe the experience) from practice?”
-
Get Feedback from a Language Model
- Paste in text from your assignment or project and ask for feedback on things like clarity, flow between sections, and academic tone.
- Use a language model in the early stages of your work to brainstorm and get creative input.
- Ask the model to act as an examiner and quiz you on a specific topic (see the example prompt after this list).
- Then ask for feedback on your answers and performance.
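An examiner prompt could, for example, look like this: “Act as an examiner and ask me five questions about (insert topic), one at a time. Wait for my answer before asking the next question, and give me feedback on my answers at the end.”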
Always remember to critically evaluate the model’s output. You can read more in the section “Why is source criticism important?”