Large Language Models (LLMs) have become the state of the art for chatbot development, delivering unprecedented performance. With the rise of LLMs, there has been a need to deploy this technology within the constraints of both practical and corporate requirements. For our network data chatbot, we have the practical need to chat with network data that is collected by our backend services and accessed via REST APIs as JSON. Other practical needs include minimizing hallucinations (false responses), maximizing deterministic responses, and fitting prompts within the LLM's maximum context window length. In addition, corporate requirements dictate that we run sandboxed LLMs to isolate this internal data from external access. To meet these requirements, we have implemented a Retrieval Augmented Generation (RAG) framework based on the LangChain orchestrator, which runs an LLM agent. The agent asks the LLM to generate formatted JSON that invokes tool functions; these functions make up the semantic layer that connects meaning to action. The tools retrieve relevant network data as JSON snippets stored in a knowledge graph database that also holds the network relationships. Ultimately, the LLM agent includes the JSON snippets as context in subsequent LLM prompts to produce the answer the user is asking for. This paper discusses how we met these needs and requirements in our network data chatbot.
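To make the pattern concrete, the sketch below illustrates the kind of LangChain tool-calling agent described above. It is a minimal, hypothetical example rather than our implementation: the endpoint URL, model server, tool name, and prompt are illustrative assumptions, and the REST call stands in for the retrieval of JSON snippets from the knowledge graph.

```python
# Hypothetical sketch of a LangChain tool-calling agent for network data.
# The LLM emits formatted JSON tool calls, the executor runs the tools, and
# the returned JSON snippets become context for the final answer.
import json
import requests
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def get_device_links(device_name: str) -> str:
    """Return the links of a network device as a JSON snippet."""
    # Hypothetical internal REST endpoint serving network data as JSON;
    # in our system this role is played by the knowledge graph database.
    resp = requests.get(
        "http://backend.internal/api/v1/links",  # assumed URL
        params={"device": device_name},
        timeout=10,
    )
    resp.raise_for_status()
    # Truncate so the snippet fits within the model's context window.
    return json.dumps(resp.json())[:4000]

tools = [get_device_links]

# A sandboxed, OpenAI-compatible model server keeps internal data on-premises;
# temperature 0 biases the model toward more deterministic responses.
llm = ChatOpenAI(base_url="http://llm.internal/v1",  # assumed local server
                 model="local-model", api_key="unused", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer questions about the network using the tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
print(executor.invoke({"input": "Which links does router-1 have?"})["output"])
```

Here the tool function forms one element of the semantic layer: the tool's docstring gives the LLM the meaning, and its body supplies the action against the backend data.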