About AI & Machine Learning Quizzes
Artificial intelligence and machine learning have undergone a revolution in the last few years. Large Language Models (LLMs) like GPT-4 and Claude, diffusion models for images, and retrieval-augmented generation (RAG) pipelines have moved from research curiosities to production workhorses. Understanding how these systems work — not just how to call an API — is now a core skill for software engineers.
QuizBytesDaily's AI/ML quizzes blend classical ML theory with practical modern AI engineering. You'll encounter questions on gradient descent and backpropagation alongside questions on attention mechanisms, tokenisation, context windows, and prompt engineering. We cover PyTorch fundamentals, embedding spaces, vector databases, and evaluation metrics like BLEU, ROUGE, and F1.
Whether you're a data scientist brushing up on fundamentals, a backend engineer integrating LLM APIs, or an ML engineer fine-tuning models, our quizzes will surface the concepts you use every day and the edge cases that matter in production.
Why Learn AI & Machine Learning?
What You'll Find in Our AI & Machine Learning Quizzes
- ✓ Neural network fundamentals (layers, activations, loss functions)
- ✓ Transformer architecture and attention mechanisms
- ✓ LLM concepts: tokenisation, context windows, temperature, top-p
- ✓ RAG pipelines: chunking, embedding, retrieval, reranking
- ✓ Model evaluation metrics (accuracy, F1, BLEU, ROUGE, perplexity)
- ✓ PyTorch tensors, autograd, and training loops
- ✓ Prompt engineering patterns and few-shot learning
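The training-loop topic above boils down to one cycle: forward pass, loss, gradients, parameter update. Here is a minimal sketch of that cycle in plain Python (no PyTorch dependency), fitting y = 2x + 1 by gradient descent on mean squared error; the data, learning rate, and epoch count are illustrative choices, not anything prescribed by the quizzes:

```python
# Toy dataset sampled from y = 2x + 1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # model parameters
lr = 0.05          # learning rate

for epoch in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        pred = w * x + b      # forward pass
        err = pred - y        # d(0.5 * (pred - y)^2) / d(pred)
        grad_w += err * x     # accumulate gradients ("backward pass")
        grad_b += err
    # Gradient descent update on the mean gradient.
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)
```

In PyTorch the same loop uses tensors, `loss.backward()` for the gradient accumulation, and an optimiser's `step()` for the update, but the structure is identical.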
Sample AI & Machine Learning Quiz Questions
A taste of what you'll find in the AI & Machine Learning quiz series.
In a large language model, what does 'temperature' control?
Explanation: Temperature scales the logits before applying softmax to produce the next-token probability distribution. A temperature of 0 is treated as greedy decoding: the model always picks the highest-probability token (deterministic, often repetitive). Higher temperatures flatten the distribution, making lower-probability tokens more likely and producing more varied and creative — but less predictable — output. Typical production values range from 0.1 (factual tasks) to 1.0 (creative tasks).
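The scaling described above can be sketched in a few lines of plain Python. The logit values below are arbitrary examples, but the mechanism — divide by temperature, then softmax — is exactly what the explanation describes:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low_temp = softmax_with_temperature(logits, temperature=0.2)   # sharply peaked
high_temp = softmax_with_temperature(logits, temperature=2.0)  # flattened
```

At temperature 0.2 nearly all probability mass lands on the top token; at 2.0 the distribution is much flatter, so sampling becomes more varied.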
What is the primary purpose of the 'retrieval' step in a RAG (Retrieval-Augmented Generation) pipeline?
Explanation: RAG separates knowledge storage from the language model. At query time, the user's question is embedded into a vector, and a similarity search retrieves the most relevant document chunks from a vector database (e.g. Pinecone, pgvector, Chroma). These chunks are injected into the LLM's prompt as context, allowing the model to answer questions about private or up-to-date data it was never trained on.
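The retrieval step above reduces to a nearest-neighbour search by cosine similarity between the query embedding and the chunk embeddings. A minimal sketch, using tiny hand-made vectors in place of real embeddings and a plain list in place of a vector database:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, chunks, k=2):
    """Return the k chunks whose embeddings are most similar to the query."""
    ranked = sorted(
        chunks,
        key=lambda c: cosine_similarity(query_embedding, c["embedding"]),
        reverse=True,
    )
    return ranked[:k]

# Toy 2-d "embeddings" — a real pipeline would use a model's vectors.
chunks = [
    {"text": "refund policy", "embedding": [1.0, 0.0]},
    {"text": "office hours", "embedding": [0.0, 1.0]},
    {"text": "returns process", "embedding": [0.9, 0.1]},
]
top = retrieve([1.0, 0.0], chunks, k=2)
```

The retrieved chunks' text would then be concatenated into the LLM prompt as context; a production system adds chunking, metadata filtering, and often a reranking pass on top of this core search.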
Which activation function is most commonly used in hidden layers of modern deep neural networks?
Explanation: ReLU (f(x) = max(0, x)) replaced sigmoid and tanh as the dominant hidden-layer activation because it does not suffer from the vanishing gradient problem for positive inputs, is computationally cheap (a single comparison), and produces sparse activations that can improve generalisation. Softmax is used in the output layer for multi-class classification. Sigmoid is still used for gates (e.g. in LSTMs) and for binary-classification outputs, but rarely in hidden layers.
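The vanishing-gradient contrast above is easy to see numerically: sigmoid's derivative peaks at 0.25 and shrinks towards zero for large inputs, while ReLU's derivative is exactly 1 for any positive input. A small sketch:

```python
import math

def relu(x):
    return max(0.0, x)

def relu_grad(x):
    # Derivative of max(0, x): 1 for positive inputs, 0 for negative.
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative sigma(x) * (1 - sigma(x)); maximum value 0.25 at x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)
```

Multiplying many sigmoid gradients (each at most 0.25) across deep layers drives the backpropagated signal towards zero; ReLU's gradient of 1 on the active path avoids that shrinkage.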
What is 'hallucination' in the context of large language models?
Explanation: LLM hallucination refers to the model generating text that sounds confident and fluent but is factually wrong or entirely fabricated — citing non-existent papers, inventing historical events, or producing plausible-looking but incorrect code. It happens because LLMs are trained to produce statistically likely text, not to verify facts. Mitigations include grounding the model with RAG, using lower temperature, and adding explicit fact-checking steps.
Ready to test your AI & Machine Learning knowledge?
Free daily quizzes — no signup, no account, no cost.
Start AI & Machine Learning Quizzes →