Project Title: AI API Platform & Chatbot
Parisi Speed School
| Details | |
|---|---|
| Project Title | AI API Platform & Chatbot |
| Project Topics | Artificial Intelligence & Machine Learning; Budgeting, Forecasting, and Cost Optimization |
| Skills & Expertise | |
| Project Synopsis: Challenge/Opportunity | Build an AI-powered knowledge API that allows applications to answer questions using approved internal documents. The platform should combine retrieval over indexed source content with a configurable large language model layer for answer generation, while remaining vendor- and model-agnostic. Many organizations have valuable knowledge spread across internal documents, manuals, playbooks, and reference materials. That knowledge is useful, but it is often difficult to search quickly and reliably inside day-to-day workflows. This project proposes a Retrieval-Augmented Generation (RAG) API that can ingest many kinds of documents, convert them into a searchable knowledge base, retrieve relevant context at query time, and use a configurable large language model to generate grounded responses for chatbot experiences in WordPress, custom web applications, and future mobile or multimodal interfaces. Internal knowledge is difficult to operationalize because: • Important guidance is spread across multiple documents and formats. • Teams cannot search internal materials quickly enough in live workflows. • Response quality can vary based on experience and familiarity with the documents. • New users need time to find and learn the right material. |
| Project Synopsis: Activities/Actions Required | The initial version will be a chat-based system backed by an API that retrieves relevant information from approved materials and generates structured, grounded responses. For the business team, the focus is on evaluating platform and tooling options, analyzing cost vs. scalability tradeoffs, and identifying practical paths for real-world use and future expansion. This includes considering how the system could evolve into broader applications (e.g., mobile, voice, or other interfaces), as well as outlining risks and operational considerations. |
| Project Synopsis: Expected Results | The MVP is a chat-based assistant backed by a reusable RAG API. End users ask questions in plain language, and the system retrieves relevant passages from approved source documents before passing that context to a large language model that generates a grounded answer. Full scope, requirements, success criteria, and assumptions are detailed below. |

MVP Scope

The MVP should:
• Answer domain-specific questions using approved internal materials only.
• Return source references with each answer.
• Support a configurable LLM layer so the implementation can use whichever model best suits the use case, cost profile, privacy requirements, or business domain.
• Expose a simple API that can be consumed by a WordPress chatbot, a website widget, or any chat front-end application (web or mobile).
• Support an admin workflow for adding or updating source documents, if time allows.
• Be designed so additional clients, channels, or future AI interfaces can reuse the same backend knowledge layer.

Included in Phase 1

• Document ingestion for PDFs, Word documents, or exported text files.
• Chunking and indexing of source content for semantic retrieval.
• LLM-backed answer generation using retrieved context.
• A chat API that accepts a question and returns an answer with cited sources.
• A WordPress-facing integration layer or simple chatbot frontend.
• A basic evaluation workflow to test answer quality against known questions.

Functional Requirements

FR-1 Document Ingestion: The system must ingest approved source materials and convert them into searchable chunks with metadata.
FR-2 Grounded Retrieval: The system must retrieve relevant passages before generating an answer.
FR-3 Chat API: The system must expose an API endpoint that accepts a question and returns:
• A text answer
• Source citations
• Optional confidence or retrieval metadata
FR-4 Configurable Model Layer: The system must support a configurable LLM provider or model-selection layer so the implementation can remain flexible across domains, costs, and deployment constraints.
FR-5 WordPress Integration: The system must be easy to connect to a WordPress site or plugin without requiring a custom frontend application.
FR-6 Admin Update Path: The system should support document re-ingestion so the knowledge base can be updated over time.
FR-7 Safe Failure Behavior: If the system cannot find strong support in the source materials, it should respond conservatively instead of inventing an answer.

Non-Functional Requirements

• Answers should be grounded in approved source materials.
• The platform should remain LLM-agnostic so model choices can evolve without major architectural rewrites.
• The system should be simple enough for a student team to understand and maintain.
• The architecture should be deployable using low-overhead cloud services.
• The API should support authentication so the knowledge base is not publicly exposed without controls.
• The system should preserve document traceability so reviewers can inspect where answers came from.

Success Criteria

The MVP will be considered successful if:
• Users can ask realistic domain questions and receive relevant answers.
• Answers consistently cite the supporting document sections used for retrieval.
• The system declines or hedges when source support is weak.
• A WordPress-based interface can call the API successfully.
• A student team can explain, run, and extend the solution without excessive platform complexity.

Assumptions

• The core platform can be built and tested using sample or placeholder documents before production content is available.
• Approved domain-specific documents will be needed later for tuning, evaluation, and final validation.
• The initial use case is internal or controlled knowledge access, not a fully open consumer product.
• A domain reviewer should validate outputs once real source materials are introduced.
• The first version should optimize for clarity, maintainability, and platform flexibility over advanced AI features.
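The ingestion, retrieval, and safe-failure requirements (FR-1, FR-2, FR-3, FR-7) can be sketched as a small, dependency-free pipeline. This is a minimal illustration only: the names (`Chunk`, `chunk_document`, `retrieve`, `answer`) and the word-overlap scoring are assumptions for the sketch, and a production build would replace the overlap scoring with embeddings and a vector index for semantic retrieval.

```python
# Minimal sketch of the Phase 1 retrieval flow (FR-1, FR-2, FR-3, FR-7).
# Names and scoring are illustrative; real retrieval would be semantic.

from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str  # source document identifier (metadata used for citations)
    text: str    # chunk content used for retrieval


def chunk_document(doc_id: str, text: str, size: int = 200) -> list[Chunk]:
    """FR-1: split a document into fixed-size word chunks with source metadata."""
    words = text.split()
    return [Chunk(doc_id, " ".join(words[i:i + size]))
            for i in range(0, len(words), size)]


def retrieve(question: str, chunks: list[Chunk], k: int = 2) -> list[tuple[float, Chunk]]:
    """FR-2: rank chunks by naive word overlap with the question
    (a stand-in for semantic search)."""
    q = set(question.lower().split())
    scored = [(len(q & set(c.text.lower().split())) / max(len(q), 1), c)
              for c in chunks]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:k]


def answer(question: str, chunks: list[Chunk], min_score: float = 0.2) -> dict:
    """FR-3: the chat API response shape (answer text, citations, metadata).
    FR-7: decline conservatively when retrieval support is weak."""
    hits = [(s, c) for s, c in retrieve(question, chunks) if s >= min_score]
    if not hits:
        return {"answer": "I could not find support for that in the approved materials.",
                "citations": [], "confidence": 0.0}
    # In the real system the retrieved context would be passed to the
    # configurable LLM layer; this sketch simply returns the top passage.
    return {"answer": hits[0][1].text,
            "citations": [c.doc_id for _, c in hits],
            "confidence": hits[0][0]}
```

A quick usage illustration: `answer("How long should warm-ups last", chunk_document("handbook", ...))` returns a dict whose `citations` list contains `"handbook"`, while an off-topic question yields the conservative fallback with an empty citation list.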
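The configurable model layer (FR-4) and the LLM-agnostic non-functional requirement can be expressed as a small interface plus a registry keyed by configuration, so swapping providers never requires architectural rewrites. `LLMProvider`, `EchoProvider`, and the registry are hypothetical names for this sketch; real entries would wrap vendor SDK calls behind the same `generate` signature.

```python
# Sketch of the configurable model layer (FR-4). All names are illustrative.

from typing import Protocol


class LLMProvider(Protocol):
    """Interface every model backend must satisfy."""
    def generate(self, question: str, context: str) -> str: ...


class EchoProvider:
    """Stand-in provider for local testing; a real provider would call a
    hosted or self-hosted model with the retrieved context."""
    def generate(self, question: str, context: str) -> str:
        return f"Based on the approved sources: {context[:200]}"


# Registry keyed by a config value (e.g. an environment variable),
# so the deployed model can change without code changes elsewhere.
PROVIDERS: dict[str, type] = {"echo": EchoProvider}


def get_provider(name: str) -> LLMProvider:
    """Resolve a provider by configuration key; unknown names fail fast."""
    try:
        return PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name}")
```

Adding a new model is then a one-line registry entry, which keeps the platform vendor- and model-agnostic as the brief requires.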
Project Timeline
| Touchpoints & Assignments | Date | Type |
|---|---|---|
| Academic Calendar | May 19, 2026 | Other |
| Summer 2026 Midterm Student Evaluation | Jul 10, 2026, 11:59 PM EST (UTC-05:00) | Evaluation |
| Summer 2026 Midterm Presentations | Jul 10, 2026, 11:59 PM EST (UTC-05:00) | Action Item |
| Summer 2026 Final Student Self Reflection | Aug 27, 2026, 11:59 PM EST (UTC-05:00) | Evaluation |
| Summer 2026 Final Presentations | Aug 27, 2026, 11:59 PM EST (UTC-05:00) | Action Item |
Teams
| Team Name | Project Name | Team Members |
|---|---|---|
| No Teams Available | | |