Azure AI-900 prep, AI fundamentals roadmap with ARIA

The Microsoft Azure AI Fundamentals exam (code AI-900) is a 60-minute, roughly 40-question multiple-choice test with a passing score of 700 out of 1000 (about 70 percent), no prerequisites, and no labs. Generative AI is now 25 percent of the scored content on the current 2026 version, which is a recent and material shift. ARIA preps you for it with an adaptive roadmap built from a CAT evaluation, a daily task engine, and a pass guarantee tied to five measurable conditions. You can start your AI-900 roadmap in about five minutes.

TL;DR

  • AI-900 is 60 minutes, about 40 questions, 700 out of 1000 to pass (roughly 70 percent), beginner level, no prerequisites, current as of 2026.
  • Five domains: AI Workloads and Considerations 20%, Machine Learning Principles 25%, Computer Vision 15%, NLP Workloads 15%, Generative AI 25%. Generative AI is the largest single domain and a recent uplift on this exam.
  • ARIA's CAT evaluation converges in 15 to 25 questions and produces a per-domain skill map before any roadmap is generated.
  • Most working professionals hit the pass-guarantee threshold (readiness 80, two mocks at 70 percent plus, one gauntlet at 80 percent plus) in 2 to 4 weeks at 30 minutes a day.
  • Bottom line: prep matches your gaps across the five domains, with extra weight on Generative AI and ML Principles where most learners arrive cold, and the pass guarantee refunds the Exam Ready plan in full if you complete the conditions and still fail.

What the AI-900 exam is

AI-900 is the entry-level Microsoft Azure AI certification, current as of 2026. It validates that you can describe common AI workloads, the building blocks of machine learning, and the Azure AI service catalog at a working level. Concept-only. No code, no labs, no math, no model training. That is what makes it the standard on-ramp for product managers, analysts, designers, recent grads, and engineers crossing over from a non-AI background.

The format: about 40 scored and unscored questions in 60 minutes, all multiple choice or multiple response. Microsoft reports the result on a scaled 1 to 1000 range, and 700 is the line, which maps to roughly 70 percent correct.

Domain | Weight | What it covers
AI Workloads and Considerations | 20% | Common AI workload categories, responsible AI principles (fairness, reliability, privacy, inclusiveness, transparency, accountability)
Machine Learning Principles | 25% | Regression, classification, clustering, supervised vs unsupervised vs reinforcement, Azure Machine Learning capabilities
Computer Vision | 15% | Image classification, object detection, OCR, facial recognition, Azure AI Vision, Custom Vision, Document Intelligence, Face service
NLP Workloads | 15% | Key phrase extraction, sentiment, entity recognition, translation, speech-to-text and text-to-speech, Azure AI Language, Speech, Translator
Generative AI | 25% | Transformer basics, prompt engineering, grounding vs hallucination, Copilot vs Azure OpenAI Service vs Azure AI Foundry

Generative AI at 25 percent is the headline change on the current 2026 version. Earlier forms put it inside a smaller bucket; today it is the largest single domain, and most legacy study material under-weights it. ARIA allocates milestones in proportion to your gaps inside the AI-900 weights, not the older syllabus.

If you have looked at AWS AI Practitioner (AIF-C01), AI-900 sits at the same altitude. Both are concept-only, beginner-tier, and reward scenario routing over hands-on skill. AI-900 leans harder on Azure service names and on responsible AI as Microsoft's six pillars. For the broader Azure on-ramp without the AI focus, see AZ-900.

How ARIA preps you for it

Five pieces, in this order, every time.

The CAT evaluation. I open with a computerized adaptive test that converges in 15 to 25 questions. Difficulty moves up when you answer correctly and down when you do not. The output is a per-domain skill estimate at the Novice, Familiar, Proficient, or Expert level. That estimate is what every later step is built from.
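
The up-when-right, down-when-wrong movement can be sketched as a simple staircase rule. This is a minimal illustration, not ARIA's actual estimator; production CATs update a latent-ability estimate (item response theory) rather than stepping a fixed ladder, and the band cut points here are made-up assumptions:

```python
def next_difficulty(current: int, correct: bool, lo: int = 1, hi: int = 5) -> int:
    """Move difficulty one step up on a correct answer, one step down on a
    miss, clamped to the [lo, hi] band. A toy staircase, not a real CAT."""
    step = 1 if correct else -1
    return max(lo, min(hi, current + step))

def skill_label(accuracy: float) -> str:
    """Map a per-domain accuracy estimate to the four bands named in the
    text. The cut points are illustrative assumptions, not ARIA's."""
    if accuracy >= 0.85:
        return "Expert"
    if accuracy >= 0.65:
        return "Proficient"
    if accuracy >= 0.40:
        return "Familiar"
    return "Novice"
```

The clamp is why a run of correct answers converges instead of running away: once you sit at the top of the band, further correct answers only confirm the estimate.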

The personalized roadmap. Once the eval closes, I generate a 3 to 5 phase roadmap sized to your gaps. Novice domains get the most milestones; Proficient domains get the fewest. For AI-900 that usually means heavier weight on Generative AI and ML Principles if you arrive cold, and a lighter touch on Computer Vision or NLP if you have used those services in product work.
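
Gap-proportional sizing can be sketched as a simple allocation: each domain's share of the milestone budget is its exam weight times its gap (1 for Novice, 0 for Expert). The gap scores, weights, and budget below are toy inputs, not ARIA's planner:

```python
def allocate_milestones(gaps: dict, weights: dict, total: int = 20) -> dict:
    """Split a milestone budget across domains in proportion to
    exam_weight * gap, so weak, heavily weighted domains get the most
    work. Rounding means counts may not sum exactly to `total`."""
    need = {d: weights[d] * gaps[d] for d in weights}
    total_need = sum(need.values()) or 1.0   # guard against an all-Expert learner
    return {d: round(total * n / total_need) for d, n in need.items()}
```

With a learner who arrives cold on Generative AI (gap 1.0), middling on ML Principles (0.5), and Proficient in Computer Vision (0.0), the cold, heavily weighted domain absorbs most of the plan and the Proficient one gets nothing.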

The daily task engine. Every time you reopen the app, get_today_task() runs and surfaces one thing in the Today Task card. One task, not a reading list. Roadmap tasks advance milestones and count toward the pass guarantee; free-play tasks do not.

The error backlog with concept-confusion categorization. Every wrong answer goes into a backlog tagged with the specific confusion behind it (for example, "confused Custom Vision with Document Intelligence" or "missed grounding vs prompt engineering boundary"). I bring the right item back at the right interval, and I cluster related confusions so a single review session collapses three near-miss errors at once.

The readiness score. A single 0 to 100 number that updates after every roadmap session. It decays when you go quiet, which is the honest signal: a score from three weeks ago does not predict tomorrow's exam. Hitting 80 and holding it is the threshold for the pass guarantee.
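
The decay-while-idle behavior can be sketched as exponential decay on days since the last session. The 14-day half-life is a made-up parameter for illustration, not ARIA's actual curve:

```python
def decayed_readiness(score: float, idle_days: int,
                      half_life_days: float = 14.0) -> float:
    """Decay a 0-100 readiness score exponentially while the learner is
    idle: after one half-life of inactivity the score halves. The
    half-life value is an assumption for this sketch."""
    return score * 0.5 ** (idle_days / half_life_days)
```

Under this model a learner who hit 80 and then went quiet for two weeks would show 40, which is the point of the decay: the number reports current predictive value, not a past peak.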

For the wider context on adaptive systems versus chatbots and question generators, the AI cert prep guide walks the four tiers. On AI-900 the difference is sharp: a generic plan spends a week re-explaining what AI is, while ARIA spends that week on the generative AI and service-routing scenarios that actually carry exam weight.

Common pitfalls on AI-900

These are the traps I see most often on this exam, and what I do about each.

Supervised vs unsupervised vs reinforcement at the example level. The definitions are easy. The exam never asks for definitions. It hands you a scenario (predict next quarter's churn from labeled history; group customers into segments without labels; train a robot arm by reward) and forces you to pick. ARIA quizzes you on scenarios, not on terms. I keep three near-miss examples in your backlog until you route them cold.

Computer Vision vs Document Intelligence vs Custom Vision. Three services with overlapping silhouettes. Azure AI Vision is general-purpose: tagging, OCR, image analysis on common categories. Custom Vision is for image classification or object detection on your own labels (logos, defects, custom species). Document Intelligence (formerly Form Recognizer) is for structured extraction from receipts, invoices, IDs, and forms. The exam tests the boundary, not the headline use case. I drill you on what to pick when the input is "scanned receipts" versus "factory-floor defect photos" versus "stock product images".
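
The boundary in that paragraph can be written down as a tiny lookup. The service names and example inputs come straight from the text; the keyword triggers and the function itself are a study aid I am assuming for illustration, not an Azure API:

```python
def route_vision_service(scenario: str) -> str:
    """Map a scenario trigger to the Azure vision-family service described
    in the text. Keyword cues are illustrative; the exam rephrases them."""
    s = scenario.lower()
    if any(k in s for k in ("receipt", "invoice", "form", "id card")):
        return "Document Intelligence"   # structured extraction from documents
    if any(k in s for k in ("defect", "logo", "your own labels", "custom")):
        return "Custom Vision"           # classification/detection on your labels
    return "Azure AI Vision"             # general tagging, OCR, image analysis
```

Note the order: document-shaped inputs are checked first, because a scanned invoice is also an image, and the exam's wrong answers exploit exactly that overlap.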

Which Azure AI service does what across Speech, Translator, Language, and CLU. Four NLP-adjacent services that learners blend together. Azure AI Speech does speech-to-text, text-to-speech, and speaker recognition. Translator does text and document translation across 100 plus languages. Azure AI Language covers sentiment, key phrases, entity recognition, summarization, and PII detection. Conversational Language Understanding (CLU) is the modern replacement for LUIS and handles intent and entity recognition for chat or voice apps. The exam loves to ask which service you would use for a specific scenario; ARIA quizzes you on triggers, not service descriptions.
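
The trigger-based routing for the four NLP-adjacent services can be sketched the same way. Again the keyword cues are assumptions for illustration, not Azure SDK behavior:

```python
def route_nlp_service(scenario: str) -> str:
    """Route an NLP scenario to the Azure service named in the text,
    using illustrative keyword triggers as study cues."""
    s = scenario.lower()
    if any(k in s for k in ("speech-to-text", "text-to-speech",
                            "speaker", "transcribe")):
        return "Azure AI Speech"
    if "translat" in s:
        return "Translator"
    if any(k in s for k in ("intent", "voice app", "chat command")):
        return "Conversational Language Understanding"
    # sentiment, key phrases, entities, summarization, PII detection
    return "Azure AI Language"
```
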

Responsible AI principles as scenario routing. Microsoft's six pillars (fairness, reliability and safety, privacy and security, inclusiveness, transparency, accountability) show up as scenarios. A facial recognition system that misidentifies one demographic at a higher rate is a fairness violation. A medical diagnosis model that does not document its training data is a transparency violation. A chatbot that stores user prompts without consent is a privacy violation. Six pillars, six scenario shapes. I drill you on the mapping until you do not have to think about it.

Generative AI specifics: prompt engineering, grounding, and Copilot vs Azure OpenAI vs Foundry. This is the biggest weight on the exam and the area legacy study material covers worst. Prompt engineering is shaping the input to get a better output (clear instructions, few-shot examples, role framing). Grounding is feeding the model your own data at inference time (retrieval-augmented generation, attached files, structured context) to reduce hallucination. The three product names trip everyone: Microsoft Copilot is the user-facing assistant across Microsoft 365 and Windows; Azure OpenAI Service is the API to call OpenAI models from your own app with Azure-managed keys and quotas; Azure AI Foundry is the unified developer portal for building, testing, and deploying generative AI apps on Azure. ARIA isolates each pair (Copilot vs Foundry, Azure OpenAI vs Foundry, prompt engineering vs grounding) until the boundaries stick.

Regression vs classification vs clustering by use case. Predict a number (regression: house price, next-month sales). Predict a category (classification: spam or not, customer churn yes or no). Group without labels (clustering: customer segments, anomaly groups). The trap is that the question phrasing dresses the same shape in different language, and a learner who memorized definitions misroutes the scenario. I quiz you on rephrased scenarios and on near-miss pairs (binary classification vs two-cluster clustering, for example) so the routing becomes automatic.

Common Questions

Do I need a machine learning or coding background for AI-900?

No. AI-900 is concept-only and assumes zero ML, data science, or programming background. The exam tests whether you can describe AI workloads, recognize the right Azure AI service for a scenario, and reason about responsible AI principles. ARIA's CAT evaluation will tell you exactly where you stand on each of the five domains before any roadmap is written.

How are generative AI questions framed on AI-900?

Generative AI is now 25 percent of the AI-900 exam, the largest single domain. Questions are scenario-based: describe what a transformer does at a working level, identify when to use prompt engineering versus grounding, recognize hallucination risk, and route a use case to Copilot, Azure OpenAI Service, or Azure AI Foundry. No coding, no model tuning, no math. The shift to 25 percent is recent enough that older AI-900 study material under-weights it; check that whatever you use was updated for the current 2026 form.

How long should I study for AI-900 at 30 minutes a day?

Most working professionals finish the AI-900 roadmap in 2 to 4 weeks at 30 minutes a day. ARIA sizes the plan to your CAT evaluation, so a stronger baseline shortens it and a weaker one lengthens it. The Today Task card is the only thing you need to open each day.

AI-900 vs AZ-900, which one should I take first?

Take whichever maps to the work you actually do. AZ-900 is the broader Azure on-ramp covering cloud concepts, services, and governance. AI-900 is narrower and focuses on AI and ML workloads on Azure. If you have no Azure exposure at all, AZ-900 first gives you the platform vocabulary that makes AI-900 land faster, but the two exams do not depend on each other and many learners pass AI-900 first because the topic is closer to current product work.

What does AI-900 unlock for AI-102 and the next Microsoft AI certs?

AI-900 is not a hard prerequisite for AI-102 (Azure AI Engineer Associate), but it is the recommended on-ramp because it covers the service catalog AI-102 then asks you to use in code. After AI-900, common next steps are AI-102 for engineers who will build with Azure AI services, DP-100 for data scientists who will design and run ML workloads, and the broader Fundamentals stack of DP-900, AZ-900, and SC-900 for adjacent role coverage.

What are the refund conditions if I do not pass AI-900?

If you complete every milestone, pass two mock exams at 70 percent or higher, pass one gauntlet at 80 percent or higher, hit readiness 80 plus, sit the exam inside the 60-day window, and still fail, you get a full refund of the Exam Ready plan. The full breakdown of conditions lives on the pass guarantee page.

Start your AI-900 roadmap

The cheapest signal on this exam is a real evaluation against your actual baseline. Five minutes for the entry, fifteen for the full diagnostic, then a roadmap sized to your gaps rather than to a generic Microsoft Learn module list, with the right weight on the Generative AI domain that the current form actually tests.

Start your AI-900 roadmap. I will run the CAT eval, write your phases, pick your day-1 task, and stay with you to exam day.