Prompt Engineering#
This section is all about prompting, a fundamental skill for using AI effectively. Being able to prompt well is like having the master key to a treasure chest: it unlocks the full potential of what AI can offer. Whether you’re new to prompting or already have a foundation, I’d recommend starting with the video. Even if the basics seem familiar, it offers a fresh perspective and introduces a six-part formula for crafting effective prompts.
Summary#
Define the Task Clearly: State what you want the AI to do. Examples: “Generate potential research questions related to…”, “Summarize key findings from these research papers…”, “Create a syllabus outline for a course on…”.
Provide Context: Provide background information. Who is the target audience (students, fellow researchers)? What are the learning objectives? Example: “Create a syllabus outline for a first-year bachelor course on sustainable engineering, focusing on circular economy principles.”
Supply Exemplars: Give examples of the desired output. This drastically improves quality. Example: If asking for research questions, provide examples of well-formulated research questions in the relevant field. If asking for a syllabus, provide an example syllabus or a link to one.
Assign a Persona: Tell the LLM who it should be “acting” as. Example: “Act as a leading expert in blockchain…” or “Act as an experienced TU Delft professor designing a new course…”. This adds context implicitly. Instead of explicitly stating “professors value conciseness, passion for the subject, and use simple language,” you simply say “act as a university professor,” and the LLM draws on its training data to infer these characteristics.
Specify the Format: Tell the LLM how you want the output formatted. Examples: “Use bullet points,” “Create a table,” “Write a concise abstract,” “Format as a LaTeX document.”
Set the Tone: Define the desired tone of voice. Examples: “Use a formal, academic tone,” “Use an engaging and accessible tone for undergraduate students,” “Use a concise and technical tone suitable for a scientific publication.”
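The six parts above can be sketched as a small prompt builder. This is only an illustration of how the pieces combine into one prompt; the function and parameter names are made up for this example and are not part of any library.

```python
# Minimal sketch of the six-part prompt formula.
# All names here are illustrative, not from any library.

def build_prompt(task, context, exemplars, persona, output_format, tone):
    """Assemble the six parts into a single prompt string."""
    parts = [
        f"Act as {persona}.",                  # persona
        f"Task: {task}",                       # task
        f"Context: {context}",                 # context
        "Examples of the desired output:\n" + "\n".join(exemplars),  # exemplars
        f"Format: {output_format}",            # format
        f"Tone: {tone}",                       # tone
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Create a syllabus outline for a course on sustainable engineering.",
    context="First-year bachelor students; focus on circular economy principles.",
    exemplars=["- Week 1: Introduction to the circular economy"],
    persona="an experienced TU Delft professor designing a new course",
    output_format="Use bullet points.",
    tone="Use an engaging and accessible tone for undergraduate students.",
)
```

The order of the parts matters less than their presence; what counts is that the model receives all six signals in one place.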
Other Helpful Tips#
Optional Video: Deep dive into how LLMs can be used
This video dives deep into all the different tools and ways to engage with LLMs. It’s very helpful but also long, so feel free to jump ahead to the sections that catch your interest.
How to address hallucinations
Large Language Models (LLMs) excel at producing text but can sometimes generate convincingly inaccurate information, a phenomenon known as “hallucinations.” The best way to combat this is to use tools trained to minimize hallucinations, such as NotebookLM (more on this in the AI Tools Database).
However, if using such a tool is not possible, here are some strategies to achieve more reliable outcomes.
Add Your Own Knowledge Sources: If you integrate your own sources (e.g. PDFs) into an LLM and ask it to reference only those, you gain more control over the dataset used. While not a perfect solution, this approach significantly reduces hallucinations by grounding the AI’s outputs in trusted and verifiable information. Setting clear boundaries for the AI helps ensure its responses are more accurate and aligned with reliable data.
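A minimal sketch of this grounding idea, assuming you have already extracted text excerpts from your own documents (the excerpts and function name here are hypothetical, for illustration only):

```python
# Sketch: grounding a prompt in your own source excerpts.
# In practice the excerpts would come from your PDFs; here
# they are hard-coded placeholders.

def grounded_prompt(question, excerpts):
    """Build a prompt that restricts the model to the given sources."""
    sources = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    return (
        "Answer using ONLY the sources below. "
        "Cite the source number for every claim. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "What are the key principles of the circular economy?",
    ["Excerpt from my lecture notes...", "Excerpt from the course reader..."],
)
```

The explicit fallback instruction (“say so”) is what sets the boundary: it gives the model a sanctioned way out instead of inventing an answer.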
Utilize Real-Time Information Models: When you need the latest information, standard AI tools might not be enough since they can’t update themselves. That’s where tools like Perplexity or ChatGPT with search features come in handy. These tools pull up-to-date data and often include links to their sources, making it simple for you to verify. This is especially useful for fast-changing topics or things you need to fact-check quickly.
Decompose Complex Tasks: AI works best when given focused tasks. For TU Delft professors, instead of asking for an entire course syllabus in one go, start with smaller requests like, “Summarize the key learning objectives for an introductory Python course.” Then, follow up with, “Provide exercises related to variables, loops, and conditionals.” This approach ensures clarity, allows you to verify results step by step, and helps the AI allocate its resources effectively.
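This step-by-step pattern can be sketched as a loop that sends focused sub-prompts in sequence, carrying earlier answers forward as context. The sub-tasks and the `ask` stub below are hypothetical; in real use, `ask` would wrap a call to your LLM of choice.

```python
# Sketch: decomposing one large request into focused sub-prompts,
# sent (and verified) one at a time. Sub-tasks are illustrative.

subtasks = [
    "Summarize the key learning objectives for an introductory Python course.",
    "Provide exercises related to variables, loops, and conditionals.",
    "Suggest a grading scheme for the exercises above.",
]

def run_step_by_step(subtasks, ask):
    """Send each sub-task in order, carrying earlier answers as context."""
    history = []
    for task in subtasks:
        context = "\n".join(history)
        answer = ask(f"{context}\n\nNext task: {task}".strip())
        history.append(answer)  # check each answer before moving on
    return history

# A stub keeps the sketch runnable; replace with a real LLM call.
answers = run_step_by_step(subtasks, ask=lambda p: f"(model answer to: {p[-40:]})")
```

Because each sub-task is small, you can verify every intermediate answer before it becomes context for the next step.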
More advanced prompting techniques
Advanced prompting methods, such as chain-of-thought prompting and megaprompts, enhance AI accuracy and response structure.
Chain-of-thought prompting guides the AI to reason step-by-step, improving logical clarity and consistency. Learn more about this method here.
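In its simplest form, chain-of-thought prompting just appends an instruction to reason before answering. A minimal sketch (the exact wording is illustrative, not a fixed recipe):

```python
# Sketch: wrapping a question in a chain-of-thought instruction.
# The phrasing below is one common variant, not the only one.

def with_chain_of_thought(question):
    """Ask the model to show its reasoning before the final answer."""
    return (
        f"{question}\n\n"
        "Think through this step by step, showing your reasoning, "
        "and only then state your final answer on a new line "
        "starting with 'Answer:'."
    )

prompt = with_chain_of_thought(
    "A course has 3 modules of 4 weeks each plus a 2-week project. "
    "How many weeks does it run?"
)
```

Asking for a marked final line (here, `Answer:`) also makes the response easier to check or parse afterwards.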
Megaprompts, on the other hand, provide detailed and comprehensive instructions to minimize ambiguity and align outputs with specific needs. Explore megaprompts in detail here.
For a broader overview of advanced prompting techniques, visit Prompt Engineering Guide.