7-Eleven’s Data Documentation Dilemma
7-Eleven’s data ecosystem is vast and sophisticated, housing thousands of tables with hundreds of columns across our Databricks environment. This data forms the backbone of our operations, analytics, and decision-making processes. Traditionally, 7-Eleven’s data dictionary and documentation lived in Confluence pages, meticulously maintained by data team members who would manually document table and column definitions.
We hit a critical roadblock as we began exploring the AI-powered features on the Databricks Data Intelligence Platform, including AI/BI Genie, intelligent dashboards, and other applications. These advanced tools rely heavily on table metadata and comments embedded directly within Databricks to generate insights, answer questions about our data, and build automated visualizations. Without proper table and column comments in Databricks itself, we were essentially leaving powerful AI capabilities on the table. For example, when Genie lacks column definitions, it can misinterpret the meaning of bespoke columns, requiring end users to clarify. Once we enriched our metadata, Genie’s contextual understanding improved dramatically: it accurately identified column purposes, surfaced the right tables in response to natural language queries, and produced far more relevant and actionable insights. Simply put, Genie, like all AI agents, gets more thoughtful and more helpful when it has better metadata to work with.
The gap between our well-documented Confluence pages and our “metadata-light” Databricks environment was preventing us from realizing the full potential of our data platform investment.
Manual Migration’s Impossible Scale
When we initially considered migrating our documentation from Confluence to Databricks, the scale of the challenge became immediately apparent. With thousands of tables containing hundreds of columns each, a manual migration would require:
- Time-intensive labor: Hundreds of person-hours to copy and paste documentation
- Manual metadata updates: Crafting thousands of individual SQL statements to update metadata, or visiting each table’s UI
- Project oversight: Implementing a tracking system to ensure all tables were properly updated
- Quality assurance: Creating a validation process to catch inevitable human errors
- Ongoing upkeep: Establishing a maintenance protocol to keep both systems in sync
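To make the scale concrete, here is a minimal sketch of the kind of SQL each table would need, using Databricks SQL’s `COMMENT ON TABLE` and `ALTER TABLE ... ALTER COLUMN ... COMMENT` syntax. The table and column names are hypothetical examples, not actual 7-Eleven schema:

```python
def sql_literal(text: str) -> str:
    # Escape embedded single quotes so the comment is a valid SQL string literal.
    return "'" + text.replace("'", "''") + "'"

def comment_statements(table: str, table_doc: str, column_docs: dict) -> list:
    """Build the Databricks SQL statements needed to document one table."""
    stmts = [f"COMMENT ON TABLE {table} IS {sql_literal(table_doc)}"]
    for column, doc in column_docs.items():
        stmts.append(
            f"ALTER TABLE {table} ALTER COLUMN {column} COMMENT {sql_literal(doc)}"
        )
    return stmts

# Hypothetical example: one table with two columns already takes three statements.
stmts = comment_statements(
    "sales.orders",
    "One row per customer order",
    {
        "order_id": "Unique order identifier",
        "store_id": "Store that fulfilled the order",
    },
)
```

Multiply that handful of statements by thousands of tables and hundreds of columns each, and the person-hour estimate above follows directly.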
Human error would be unavoidable even if we dedicated significant resources to this effort. Some tables would be missed, comments would be incorrectly formatted, and the process would likely need to be repeated as documentation evolved. Moreover, the tedious nature of the work would likely lead to inconsistent quality across the documentation.
Most concerning was the opportunity cost. While our data team focused on this migration, they couldn’t work on higher-value initiatives. Every day, we faced delays in strengthening our Databricks metadata, leaving untapped potential in the AI/BI capabilities already at our fingertips.
The Intelligent Document Processing Pipeline
To solve this challenge, 7-Eleven developed a sophisticated agentic AI workflow powered by Llama 4 Maverick, deployed via Mosaic AI Model Serving, that automated the entire documentation migration through an intelligent multistage pipeline:
- Discovery phase: The agent uses Databricks APIs to retrieve all tables, table names, and column structures.
- Document retrieval: The agent pulls all relevant data dictionary documents from Confluence, creating a corpus of potential documentation sources.
- Reranking and filtering: Using advanced reranking algorithms, the system prioritizes the most relevant documentation for each table, filtering out noise and irrelevant content. This critical step ensures tables are matched with their proper documentation even when naming conventions aren’t perfectly consistent.
- Intelligent matching: For each Databricks table, the AI agent analyzes potential documentation matches, using contextual understanding to determine the correct Confluence page even when names don’t match exactly.
- Targeted extraction: Once the correct documentation is identified, the agent extracts relevant descriptions for both tables and their columns, preserving the original meaning while formatting appropriately for Databricks metadata.
- SQL generation: The system automatically generates properly formatted SQL statements to update the Databricks table and column comments, handling special characters and formatting requirements.
- Execution and verification: The agent runs the SQL updates and, through MLflow tracking and evaluation, verifies that metadata was applied correctly, logs results, and surfaces any issues for human review.
- Monitoring and insights: The team also uses an AI/BI Genie dashboard to track project metrics in real time, ensuring transparency, quality control, and continuous improvement.
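As a rough illustration of the matching problem the pipeline solves, the sketch below approximates the name-matching step with standard-library string similarity. This is a deliberate simplification — the actual pipeline relies on Llama 4 Maverick with contextual reranking rather than edit distance — and the table and page names are invented:

```python
import difflib

def candidate_pages(table_name: str, page_titles: list, cutoff: float = 0.55) -> list:
    """Rank Confluence page titles by rough similarity to a table name."""
    def norm(s: str) -> str:
        # Normalize so "order_items" can match "Order Items Data Dictionary".
        return s.lower().replace("_", " ").replace("-", " ")

    by_norm = {norm(t): t for t in page_titles}
    hits = difflib.get_close_matches(norm(table_name), list(by_norm), n=3, cutoff=cutoff)
    return [by_norm[h] for h in hits]

# Hypothetical page titles; in the real pipeline the top candidates would be
# confirmed, and their descriptions extracted, by the LLM stage.
pages = ["Order Items Data Dictionary", "Store Locations", "Fuel Pricing Overview"]
matches = candidate_pages("order_items", pages)
```

String similarity alone handles simple naming drift; the LLM stages earn their keep on the harder cases, where a page documents several tables or uses entirely different terminology.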
This intelligent pipeline transformed months of tedious, error-prone work into an automated process that completed the initial migration in days. The system’s ability to understand context and make intelligent matches between differently named or structured sources was key to achieving high accuracy.
Since implementing this solution, we plan to migrate documentation for over 90% of our tables, unlocking the full potential of Databricks’ AI/BI features. What began as a lightly used AI assistant has evolved into an everyday tool in our data workflows. Genie’s ability to understand context now mirrors how a human would interpret the data, thanks to the column-level metadata we injected. Our data scientists and analysts can now use natural language queries through AI/BI Genie to explore data, and our dashboards leverage the rich metadata to provide more meaningful visualizations and insights.
The solution continues to provide value as an ongoing synchronization tool, ensuring that as our documentation evolves in Confluence, those changes are reflected in our Databricks environment. This project demonstrated how thoughtfully applied AI agents can solve complex data governance challenges at enterprise scale, turning what seemed like an insurmountable documentation task into an elegant automated solution.
Want to learn more about AI/BI and how it can help unlock value from your data? Learn more here.