Enterprise data with tens of millions of rows, row-level and column-level security, and dozens of datasets spanning multiple business domains needs AI-generated answers that are trustworthy, reproducible, and fast, and that respect governance rules consistently. With foundation models (FMs), organizations can already build systems that work well for small datasets, where a business user asks a question about their data and gets an answer in seconds. Amazon Quick can also turn your large enterprise data into fast, accurate, AI-powered decisions. In this post, you will learn about five new capabilities of Amazon Quick that accelerate how data professionals deliver trusted AI-powered insights at enterprise scale.
Dataset Q&A: Talk to your data directly
When a VP asks, “How is churn trending for this product?”, getting that answer means either finding the right dashboard (if one exists for that exact cut) or waiting for an analyst to write a query and validate the result. The gap between having a question and having a trustworthy answer is measured in hours or days, and it scales with organizational complexity: more teams, more datasets, and more questions that nobody pre-built a view for.
Dataset Q&A reduces that gap. You attach one or more datasets to the chat agent, or a Quick Space with mixed assets, and ask in natural language. The system generates SQL and executes it across the full dataset (millions of rows with no sampling) to return results in seconds.
Generating SQL from a question is the straightforward part. The harder problem is everything that informs how that SQL gets written. If an analyst reports 2.3 percent back to the VP, the implicit contract is that they understood the computations, filters, time horizon, and any other context the question requires; the answer is trusted because it came from the analyst. The system must meet the same bar. It resolves ambiguity in the question itself (does “growth” mean transactions, customers, revenue, or units?), determines the right fields, aggregations, and filters, and then applies the business definitions analysts provided through dataset metadata. The result is SQL that aims to reflect your domain’s actual semantics, not a best-guess interpretation of column names.
Amazon Quick applies the row-level and column-level access policies you configured on datasets for dashboards to AI-generated queries as well, scoped to your identity. The security posture that you’ve already built carries over to conversational answers without additional configuration. The result is that you go from question to verified answer without filing a ticket with business analysts, waiting for a dashboard update, or taking on pre-configuration overhead.
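Conceptually, applying an existing row-level security rule to an AI-generated query can be pictured as wrapping the generated SQL with a predicate tied to the caller's identity. The sketch below is illustrative only; the rule store, function names, and SQL-rewriting strategy are assumptions, not how Amazon Quick implements enforcement.

```python
# Conceptual sketch (not the Amazon Quick implementation): scope an
# AI-generated query with the row-level security predicate configured
# for the caller's identity, so every answer respects the same rules
# that already govern dashboards.

RLS_RULES = {
    # dataset -> {principal: predicate}; a hypothetical rule store
    "sales": {"region_managers_east": "region = 'EAST'"},
}

def scope_query(generated_sql: str, dataset: str, principal: str) -> str:
    """Wrap generated SQL so the RLS predicate is enforced uniformly."""
    predicate = RLS_RULES.get(dataset, {}).get(principal)
    if predicate is None:
        return generated_sql  # no rule configured: full visibility
    return f"SELECT * FROM ({generated_sql}) scoped WHERE {predicate}"

sql = "SELECT region, SUM(net_revenue) AS revenue FROM sales GROUP BY region"
print(scope_query(sql, "sales", "region_managers_east"))
```

The point of wrapping (rather than trusting the model to emit the predicate) is that enforcement stays deterministic even when the generated SQL varies.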
Explanations: Verifying the reasoning
Speed is necessary but not sufficient. When computational accuracy matters (and in enterprise analytics, it typically does), you must be able to see the work. Chat explanations show the full reasoning chain: the tools invoked, the SQL generated, the filters applied, the assumptions made, and a plain-language summary for non-technical stakeholders.

For Business Intelligence engineers and data analysts, this is the development-time accelerant. You ask benchmark questions, inspect the reasoning, curate additional context, and tune guardrails. After you’re confident that the agent handles your domain correctly, you release it to stakeholders. What used to be weeks of iterative testing compresses into a few focused sessions.
You can learn more in the Dataset Q&A launch post. You can also learn more about how the AWS Technical Field Communities program used Dataset Q&A to improve query accuracy by over 48 percent and reduce resolution time from 90 minutes to under 5 minutes across over 15,000 members in the Beyond BI: How the Dataset Q&A feature of Amazon Quick powers the next generation of data decisions blog post.
Semantic enrichment: Teaching AI your business language
The accuracy of Dataset Q&A depends largely on how well the system understands your business vocabulary. A column called revenue tells an AI nothing about whether that’s gross or net, pre- or post-returns, accrual or cash basis. A column called active_customers doesn’t reveal whether the threshold is 12 months or 24. This isn’t a model intelligence problem; it’s an information problem. Without business context, even the best model generates queries built on assumptions.
Dataset Enrichment gives authors a way to close that gap without the need for a topic configuration or other complex setup. At the dataset level, you provide a plain-language description of what the data represents and free-form instructions that tell the AI how to reason about it. For example, “revenue here is net after returns; for year-over-year, use fiscal_year not calendar_year”. Optionally, you can upload a metadata file from a system of record that you maintain, such as an existing data catalog or a team wiki. At the column level, you can organize fields into logical folders, add descriptions, and annotate edge cases.
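To make the enrichment inputs concrete, here is a sketch of what such metadata could look like as a structured file. The field names and structure are hypothetical, chosen for illustration; they are not the exact file format Amazon Quick ingests.

```python
import json

# Hypothetical enrichment metadata for an "orders" dataset. Every key
# name here is illustrative, not Amazon Quick's actual schema.
enrichment = {
    "dataset": "orders",
    "description": "One row per completed order; revenue is net after returns.",
    "instructions": [
        "For year-over-year comparisons, use fiscal_year, not calendar_year.",
        "Exclude test accounts (is_test = true) from all aggregates.",
    ],
    "columns": {
        "revenue": {
            "description": "Net revenue after returns, USD, accrual basis"
        },
        "active_customers": {
            "description": "Customers with a purchase in the trailing 12 months"
        },
    },
}

print(json.dumps(enrichment, indent=2))
```

The value is in the content, not the format: each line answers exactly the kind of ambiguity (gross vs. net, 12 vs. 24 months) described above.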
The investment is minutes. The payoff is that when someone asks a question downstream, Quick applies the definitions that your team has already agreed on. For more information, see Dataset Enrichment in the Amazon Quick User Guide.
How Quick finds the right source—and the right approach—for each question
Enriching a single dataset is one piece of the puzzle. In a typical enterprise, you might have access to dozens of datasets and dashboards across sales, operations, finance, HR, and market research. To answer a question presented in natural language, the system must pick the right data assets and the most appropriate agent for the task. Get either wrong, and even the most capable model produces an irrelevant or incorrect answer.
The agentic system in Quick has a semantic layer that searches across your structured assets (dashboards, datasets, or topics) to identify the right source before it constructs a query. It interprets query intent and context to surface results that go beyond straightforward keyword matching. When a user asks about escalations, but the relevant dashboard uses the term tickets, the system can still route to the right asset. It does so by reasoning over the relationships between the query and the content within your data landscape.
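The escalations-to-tickets example can be sketched as similarity search over embeddings rather than keyword matching. The toy example below uses hand-made 3-dimensional vectors purely to illustrate the idea; a real semantic layer would use a learned embedding model and a far richer asset index.

```python
import math

# Toy semantic routing: each asset gets an embedding; a query routes to
# the most similar asset even when the keywords never match. The 3-d
# vectors are hand-made for illustration, not real embeddings.
ASSETS = {
    "support_tickets_dashboard": [0.9, 0.1, 0.0],  # near "escalations"
    "revenue_dataset":           [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def route(query_vec):
    """Pick the asset whose embedding is most similar to the query."""
    return max(ASSETS, key=lambda name: cosine(query_vec, ASSETS[name]))

# A query about "escalations" embeds near the tickets dashboard even
# though the word "tickets" never appears in the question.
print(route([0.8, 0.2, 0.1]))  # -> support_tickets_dashboard
```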
For multi-step questions like “How is churn trending, and what’s driving it in the Southeast region?”, the system identifies which specialized agents and tools to invoke at each step. The orchestrator reasons across the full question, plans the sequence, and assembles a coherent answer from multiple capabilities working in concert.
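The plan-then-execute pattern can be sketched in miniature. The planner below is deliberately naive (it splits on a conjunction and picks a tool by keyword); the tool names and decomposition heuristic are assumptions for illustration, not how Quick's orchestrator works.

```python
# Minimal sketch of step decomposition for a compound question:
# break it into sub-questions, assign each a (hypothetical) tool,
# and return an ordered plan the orchestrator would then execute.

def plan(question: str) -> list[dict]:
    """Naive planner: split on ', and ' to illustrate decomposition."""
    steps = []
    for i, part in enumerate(question.rstrip("?").split(", and ")):
        tool = "trend_query" if "trending" in part else "driver_analysis"
        steps.append({"step": i + 1, "question": part.strip(), "tool": tool})
    return steps

for step in plan("How is churn trending, and what's driving it in the Southeast region?"):
    print(step)
```

A real orchestrator reasons over the whole question with a model rather than string splitting, but the output shape (an ordered sequence of tool invocations) is the essential idea.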
We’ve made improvements to how this layer discovers, ranks, and selects both the right source and the right tool. This is particularly useful in environments where you have access to a wide variety of data across multiple business domains. Better grounding means fewer wrong-source errors, more precise tool selection, and higher confidence in answers, especially for ambiguous or complex questions that span multiple analytical steps. This discovery and orchestration layer is what makes the broader agentic experience reliable at enterprise scale.
AI-powered dashboard generation: From days to minutes
Operational dashboards hold a unique place in enterprise decision-making that conversational tools don’t replace. A well-built dashboard packs leading and lagging indicators, metrics organized by function and time horizon, and filter controls for slicing context into a single shared surface. That density gives a team of decision makers, each with different roles and different questions, a shared and effective analytical surface that drives alignment and speed. For complex businesses, this shared artifact is how the sum of the parts becomes greater than the individual pieces.

Conversational analysis tools, like chat, are a great complement that stakeholders can use to explore further with complex follow-up questions without burdening analysts or engineers with unplanned work. But dashboards remain the backbone of operational rhythm.
Constructing a dashboard involves selecting the right visualizations, organizing sheets logically, creating calculated fields, adding filter controls, and iterating on the layout. These tasks take days or weeks of skilled manual effort, and much of the work is mechanical after the analytical intent is clear. When the author knows what they want, the bottleneck is construction, not thinking.

AI-powered dashboard generation removes that construction phase. You select up to three datasets and describe what you want to see: the business questions, the metrics, and how you want the information organized. You review an editable plan before anything is built. Amazon Quick then produces multiple organized sheets with visualizations matched to the data, filter controls for stakeholders to slice by different dimensions, and calculated fields like year-over-year growth and month-over-month comparisons. The whole process takes minutes.
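The editable plan in the workflow above can be pictured as a structured document the author reviews and adjusts before generation runs. The shape below is a hypothetical illustration of that intermediate artifact; the field names are not Amazon Quick's actual plan schema.

```python
# Illustrative shape of an editable dashboard-generation plan: datasets,
# sheets with visuals, shared filters, and calculated fields. All field
# names are hypothetical, chosen to mirror the workflow described above.
dashboard_plan = {
    "datasets": ["orders", "customers"],  # up to three datasets
    "sheets": [
        {
            "title": "Executive summary",
            "visuals": [
                {"type": "kpi", "metric": "net_revenue"},
                {"type": "line", "metric": "net_revenue", "by": "month"},
            ],
        },
        {
            "title": "Regional detail",
            "visuals": [{"type": "bar", "metric": "net_revenue", "by": "region"}],
        },
    ],
    "filters": ["region", "fiscal_year"],
    "calculated_fields": {
        "yoy_growth": "(revenue - revenue_prior_year) / revenue_prior_year",
    },
}

# The review step is an edit to this structure before anything is built,
# for example renaming a sheet:
dashboard_plan["sheets"][1]["title"] = "Southeast deep dive"
print(dashboard_plan["sheets"][1]["title"])
```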
The output is a native Amazon Quick analysis, not a static image or a one-time export. It plugs into existing publishing workflows, embedding patterns, and continuous integration and delivery (CI/CD) pipelines, and you can edit it with point-and-click controls to refine visuals after generation. You can publish it as a dashboard, share it, embed it, or schedule reports, just as you would with one built manually. The semantic curation invested in dataset enrichment influences the quality of the output: with better business context, the initial output is more accurate, and less refinement is needed before you’re ready to publish.
During early access, authors reported reducing dashboard creation time by 90 percent or more. You can learn more about the feature and examples in the Generate dashboards from natural language prompts in Amazon Quick post.
Real-time data access: Direct Query on Amazon Simple Storage Service (Amazon S3) Tables
The previously described capabilities (querying, explanations, semantic enrichment, discovery, and dashboard generation) are only as good as the data that they operate on. Enterprises are increasingly building their data estates on open table formats like Apache Iceberg because of the performance, economics, and flexibility that these formats offer. But there’s been a persistent disconnect: to analyze that data in a Business Intelligence tool, or ask questions about it through an AI agent, you had to move it into an OLAP layer first. Each hop adds latency, cost, and one more place where freshness can degrade.
We’ve removed that hop. Amazon Quick can now connect directly to Apache Iceberg tables sitting in S3 Table buckets without the need for an intermediate engine. The data lake becomes the analytics-ready source.
Authors can choose between SPICE mode (for high-concurrency, sub-second dashboards) and Direct Query mode (for scenarios where freshness matters most). In Direct Query mode, traditional dashboards and conversational AI agents both read from the same live data: you can see a transaction in a chart, metric, or chat answer moments after it lands in an S3 Table bucket through a streaming pipeline.
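The trade-off between the two modes comes down to how stale an answer is allowed to be versus how fast and concurrent reads must be. The helper below is a rule-of-thumb sketch of that decision, with a made-up 5-minute threshold; it is not official sizing guidance.

```python
# Rule-of-thumb sketch (not official guidance) of the SPICE vs. Direct
# Query decision: if answers must reflect data landing within the last
# few minutes, read the live Iceberg tables; otherwise favor the cached,
# high-concurrency, sub-second path. The 300-second cutoff is arbitrary.

def choose_mode(max_staleness_seconds: int) -> str:
    """Return the query mode for a given freshness requirement."""
    return "DIRECT_QUERY" if max_staleness_seconds < 300 else "SPICE"

print(choose_mode(30))    # streaming pipeline, near-real-time answers
print(choose_mode(3600))  # hourly refresh is acceptable
```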
For organizations adopting modern lake-first architectures, this is a meaningful simplification: one governed data layer serving both human and AI consumers, with no replication or orchestration in between. Learn more and see a detailed example implementation in our post, From data lake to AI-ready analytics: Introducing new data source with S3 Tables in Amazon Quick.
Built for the humans who build analytics
These capabilities remove toil from the analytics workflow without removing the humans who provide judgment, governance, and domain expertise. The business analyst still decides how churn should be computed, the data engineer still governs who can access what, and the business analyst still curates the experience for their stakeholders.

Enterprise data architectures are complex and varied, and we understand that no single prescriptive workflow is universal. What we’re building is a rich set of tools and capabilities, each valuable on its own, built on common tenets of governance in a low-maintenance managed service, that collectively accelerate how enterprises put AI to work within their data landscape. What changes is the effort required to translate expertise into working, trusted analytics at scale. Where it once took days to build a dashboard, it now takes minutes. Where verifying an AI answer required re-running queries manually, it now takes a selection. Where importing business context meant deliberate topic configuration, it now takes a file upload.
These capabilities are available today in all AWS Regions where Amazon Quick is available.