🤖

AI & Machine Learning Quiz Questions

From neural networks to LLMs and RAG pipelines

Take AI & Machine Learning Quizzes →

About AI & Machine Learning Quizzes

Artificial intelligence and machine learning have undergone a revolution in the last few years. Large Language Models (LLMs) like GPT-4 and Claude, diffusion models for images, and retrieval-augmented generation (RAG) pipelines have moved from research curiosities to production workhorses. Understanding how these systems work — not just how to call an API — is now a core skill for software engineers.

QuizBytesDaily's AI/ML quizzes blend classical ML theory with practical modern AI engineering. You'll encounter questions on gradient descent and backpropagation alongside questions on attention mechanisms, tokenisation, context windows, and prompt engineering. We cover PyTorch fundamentals, embedding spaces, vector databases, and evaluation metrics like BLEU, ROUGE, and F1.

Whether you're a data scientist brushing up on fundamentals, a backend engineer integrating LLM APIs, or an ML engineer fine-tuning models, our quizzes will surface the concepts you use every day and the edge cases that matter in production.

Why Learn AI & Machine Learning?

Career-defining skill
AI/ML engineering is one of the fastest-growing fields in software. Companies across every industry are integrating AI — engineers who understand the internals are far more effective than those who only use SDKs.
Research meets practice
Modern AI sits at the intersection of math, statistics, and software engineering. Understanding both theory and implementation makes you a more complete practitioner.
Move fast safely
Knowing how models fail — hallucination, context overflow, embedding drift — helps you build reliable AI features, not just demos.

What You'll Find in Our AI & Machine Learning Quizzes

  • Neural network fundamentals (layers, activations, loss functions)
  • Transformer architecture and attention mechanisms
  • LLM concepts: tokenisation, context windows, temperature, top-p
  • RAG pipelines: chunking, embedding, retrieval, reranking
  • Model evaluation metrics (accuracy, F1, BLEU, ROUGE, perplexity)
  • PyTorch tensors, autograd, and training loops
  • Prompt engineering patterns and few-shot learning

Other Quiz Categories

🐍 Python · 🧮 Algorithms · JavaScript · 🏗️ System Design · 📘 TypeScript

Sample AI & Machine Learning Quiz Questions

A taste of what you'll find in the AI & Machine Learning quiz series.

1

In a large language model, what does 'temperature' control?

A. The computational speed of token generation
B. The randomness and creativity of the output distribution ✓ Correct
C. The maximum number of tokens in the response
D. The size of the model's context window

Explanation: Temperature scales the logits before applying softmax to produce the next-token probability distribution. A temperature of 0 makes the model always pick the highest-probability token (deterministic, repetitive). Higher temperatures flatten the distribution, making lower-probability tokens more likely, producing more varied and creative — but less predictable — output. Typical production values range from 0.1 (factual tasks) to 1.0 (creative tasks).
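The logit scaling described above can be sketched in a few lines of plain Python. The logits here are made-up illustrative values, not real model output:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max so exp() never overflows
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical next-token logits

low = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: more randomness
```

Lowering the temperature concentrates probability mass on the top token; raising it spreads the mass out, which is exactly why high-temperature sampling feels more "creative".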

2

What is the primary purpose of the 'retrieval' step in a RAG (Retrieval-Augmented Generation) pipeline?

A. To fine-tune the language model on domain-specific data
B. To compress the prompt to fit within the context window
C. To fetch relevant document chunks from a vector store to augment the LLM prompt ✓ Correct
D. To cache previous responses to avoid redundant API calls

Explanation: RAG separates knowledge storage from the language model. At query time, the user's question is embedded into a vector, and a similarity search retrieves the most relevant document chunks from a vector database (e.g. Pinecone, pgvector, Chroma). These chunks are injected into the LLM's prompt as context, allowing the model to answer questions about private or up-to-date data it was never trained on.
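A minimal sketch of the retrieval step, using hand-written 3-dimensional vectors in place of a real embedding model and vector database (both chunk texts and embeddings below are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": chunk text -> precomputed embedding.
store = {
    "Invoices are due within 30 days.": [0.9, 0.1, 0.2],
    "The API rate limit is 100 req/s.": [0.1, 0.8, 0.3],
}

def retrieve(query_embedding, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Pretend this vector is the embedded question "When are invoices due?"
context = retrieve([0.85, 0.15, 0.25])
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: When are invoices due?"
```

A production pipeline swaps the dictionary for a vector database with an approximate-nearest-neighbour index, but the shape of the step — embed the query, rank by similarity, inject the winners into the prompt — is the same.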

3

Which activation function is most commonly used in hidden layers of modern deep neural networks?

A. Sigmoid
B. Tanh
C. ReLU (Rectified Linear Unit) ✓ Correct
D. Softmax

Explanation: ReLU (f(x) = max(0, x)) replaced sigmoid and tanh as the dominant hidden-layer activation because it does not suffer from the vanishing gradient problem for positive inputs, is computationally cheap (a single comparison), and produces sparse activations that improve generalisation. Softmax is used in the output layer for multi-class classification. Sigmoid is still used for gates (e.g. in LSTMs) and binary-classification outputs, but rarely in hidden layers.
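The vanishing-gradient contrast can be shown numerically with the analytic derivatives of each activation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25 (x = 0), shrinks toward 0 for large |x|

def relu(x):
    return max(0.0, x)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # constant 1 for all positive inputs

# At x = 5, sigmoid's gradient has nearly vanished; ReLU's has not.
print(sigmoid_grad(5.0))  # ~0.0066
print(relu_grad(5.0))     # 1.0
```

Stack many sigmoid layers and these sub-0.25 factors multiply together during backpropagation, shrinking the gradient exponentially with depth; ReLU's factor of 1 on the positive side avoids that.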

4

What is 'hallucination' in the context of large language models?

A. When the model generates an unusually long response
B. When the model confidently produces factually incorrect or fabricated information ✓ Correct
C. When the model refuses to answer a question
D. When the model repeats the same phrase multiple times

Explanation: LLM hallucination refers to the model generating text that sounds confident and fluent but is factually wrong or entirely fabricated — citing non-existent papers, inventing historical events, or producing plausible-looking but incorrect code. It happens because LLMs are trained to produce statistically likely text, not to verify facts. Mitigations include grounding the model with RAG, using lower temperature, and adding explicit fact-checking steps.


Ready to test your AI & Machine Learning knowledge?

Free daily quizzes — no signup, no account, no cost.

Start AI & Machine Learning Quizzes →