
Making a NetAI Playground for Agentic AI Experimentation

Hey there, everyone, and welcome to the latest installment of “Hank shares his AI journey.” 🙂 Artificial intelligence (AI) continues to be all the rage, and after returning from Cisco Live in San Diego, I was excited to dive into the world of agentic AI.

With announcements like Cisco’s own agentic AI solution, AI Canvas, as well as discussions with partners and other engineers about this next phase of AI possibilities, my curiosity was piqued. What does this all mean for us network engineers? Moreover, how can we start to experiment with and learn agentic AI?

I began my exploration of agentic AI by reading and watching a wide range of content to gain a deeper understanding of the subject. I won’t delve into a detailed definition in this blog, but here are the basics of how I think about it:

Agentic AI is a vision for a world where AI doesn’t just answer the questions we ask; it begins to work more independently. Driven by the goals we set, and using access to the tools and systems we provide, an agentic AI solution can monitor the current state of the network and take actions to ensure our network operates exactly as intended.

Sounds pretty darn futuristic, right? Let’s dive into the technical aspects of how it works. Roll up your sleeves, get into the lab, and let’s learn some new things.

What are AI “tools?”

The first thing I wanted to explore and better understand was the concept of “tools” within this agentic framework. As you may recall, the LLM (large language model) that powers AI systems is essentially an algorithm trained on vast amounts of data. An LLM can “understand” your questions and instructions. On its own, however, the LLM is limited to the data it was trained on. It can’t even search the web for current movie showtimes without some “tool” allowing it to perform a web search.

From the very early days of the GenAI buzz, developers have been building and adding “tools” into AI applications. Initially, the creation of these tools was ad hoc and varied depending on the developer, the LLM, the programming language, and the tool’s goal. But recently, a new framework for building AI tools has generated a lot of excitement and is starting to become a new “standard” for tool development.

This framework is known as the Model Context Protocol (MCP). Originally developed by Anthropic, the company behind Claude, MCP allows any developer to build tools, known as “MCP servers,” and any AI platform can act as an “MCP client” to use those tools. It’s important to remember that we’re still in the very early days of AI and agentic AI; however, today, MCP appears to be the approach for tool building. So I figured I’d dig in and figure out how MCP works by building my own very basic NetAI agent.

I’m far from the first networking engineer to want to dive into this space, so I started by reading a couple of very helpful blog posts by my friend Kareem Iskander, Head of Technical Advocacy in Learn with Cisco.

These gave me a jumpstart on the key topics, and Kareem was helpful enough to provide some example code for creating an MCP server. I was ready to explore more on my own.

Creating a local NetAI playground lab

There is no shortage of AI tools and platforms today: ChatGPT, Claude, Mistral, Gemini, and so many more. Indeed, I use many of them regularly for various AI tasks. However, for experimenting with agentic AI and AI tools, I wanted something that was 100% local and didn’t depend on a cloud-connected service.

A primary reason for this was that I wanted to make sure all of my AI interactions remained entirely on my computer and within my network. I knew I would be experimenting in an entirely new area of development. I was also going to send data about “my network” to the LLM for processing. And while I’ll be using non-production lab systems for all the testing, I still didn’t like the idea of leveraging cloud-based AI systems. I would feel freer to learn and make mistakes if I knew the risk was low. Yes, low… Nothing is completely risk-free.

Luckily, this wasn’t the first time I had considered local LLM work, and I had a couple of potential options ready to go. The first is Ollama, a powerful open-source engine for running LLMs locally, or at least on your own server. The second is LMStudio, and while not itself open source, it has an open-source foundation, and it’s free to use for both personal and “at work” experimentation with AI models. When I read a recent blog by LMStudio about MCP support now being included, I decided to give it a try for my experimentation.

Creating Mr. Packets with LMStudio

LMStudio is a client for running LLMs, but it isn’t an LLM itself. It provides access to a large number of LLMs available to download and run. With so many LLM options available, it can be overwhelming when you get started. The key thing for this blog post and demonstration is that you need a model that has been trained for “tool use.” Not all models are. And furthermore, not all “tool-using” models actually work with tools. For this demonstration, I’m using the google/gemma-2-9b model. It’s an “open model” built using the same research and tooling behind Gemini.

The next thing I needed for my experimentation was an initial idea for a tool to build. After some thought, I decided the “hello world” for my new NetAI project would be a way for the AI to send and process “show commands” from a network device. I chose pyATS to be my NetDevOps library of choice for this project. In addition to being a library that I’m very familiar with, it has the benefit of automatic output processing into JSON through the library of parsers included in pyATS. I was also able, within just a couple of minutes, to put together a basic Python function to send a show command to a network device and return the output as a starting point.

Here’s that code:

from typing import Any, Dict, Optional

from genie.testbed import load


def send_show_command(
    command: str,
    device_name: str,
    username: str,
    password: str,
    ip_address: str,
    ssh_port: int = 22,
    network_os: Optional[str] = "ios",
) -> Optional[Dict[str, Any]]:

    # Structure a dictionary for the device configuration that can be loaded by pyATS
    device_dict = {
        "devices": {
            device_name: {
                "os": network_os,
                "credentials": {
                    "default": {"username": username, "password": password}
                },
                "connections": {
                    "ssh": {"protocol": "ssh", "ip": ip_address, "port": ssh_port}
                },
            }
        }
    }

    # Build a pyATS testbed from the dictionary and grab the device object
    testbed = load(device_dict)
    device = testbed.devices[device_name]

    # Connect, run the command through the matching pyATS parser, and disconnect
    device.connect()
    output = device.parse(command)
    device.disconnect()

    return output

Between Kareem’s blog posts and the getting-started guide for FastMCP 2.0, I found it was frighteningly easy to convert my function into an MCP server/tool. I just needed to add five lines of code.

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")

@mcp.tool()
def send_show_command():
    ...


if __name__ == "__main__":
    mcp.run()

Well… it was ALMOST that easy. I did have to make a few adjustments to the basics above to get it to run successfully. You can see the full working copy of the code in my newly created NetAI-Learning project on GitHub.

As for those few adjustments, the changes I made were:

  • A nice, detailed docstring for the function behind the tool. MCP clients use the details from the docstring to understand how and why to use the tool.
  • After some experimentation, I opted to use the “http” transport for the MCP server rather than the default and more common “STDIO.” The reason I went this way was to prepare for the next phase of my experimentation, when my pyATS MCP server will likely run within the network lab environment itself rather than on my laptop. STDIO requires the MCP client and server to run on the same host system. (A minimal sketch of both adjustments follows below.)
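
Here’s a minimal sketch of what those two adjustments look like. The docstring wording and the host/port values are illustrative placeholders rather than the exact code from my repo:

from fastmcp import FastMCP

mcp = FastMCP("NetAI Hello World")


@mcp.tool()
def send_show_command(command: str, device_name: str, username: str,
                      password: str, ip_address: str, ssh_port: int = 22,
                      network_os: str = "ios") -> dict:
    """Run a 'show' command on a network device over SSH and return the
    parsed output as structured data. Use this tool whenever the user asks
    about the current operational state of a device, such as its software
    version, interfaces, or neighbors."""
    ...  # same pyATS logic as shown earlier


if __name__ == "__main__":
    # Use the streamable HTTP transport instead of the default STDIO so the
    # MCP client and server don't have to run on the same host.
    mcp.run(transport="http", host="127.0.0.1", port=8002)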

So I fired up the MCP server, hoping that there wouldn’t be any errors. (Okay, to be honest, it took a few iterations in development to get it working without errors… but I’m doing this blog post “cooking show style,” where the boring work along the way is hidden. 😉)

python netai-mcp-hello-world.py 

╭─ FastMCP 2.0 ──────────────────────────────────────────────────────────────╮
│                                                                            │
│                       [FastMCP ASCII art banner]                           │
│                                                                            │
│    🖥️  Server name:     FastMCP                                             │
│    📦 Transport:       Streamable-HTTP                                     │
│    🔗 Server URL:      http://127.0.0.1:8002/mcp/                          │
│                                                                            │
│    📚 Docs:            https://gofastmcp.com                               │
│    🚀 Deploy:          https://fastmcp.cloud                               │
│                                                                            │
│    🏎️  FastMCP version: 2.10.5                                              │
│    🤝 MCP version:     1.11.0                                              │
│                                                                            │
╰────────────────────────────────────────────────────────────────────────────╯


[07/18/25 14:03:53] INFO     Starting MCP server 'FastMCP' with transport 'http' on http://127.0.0.1:8002/mcp/   server.py:1448
INFO:     Started server process [63417]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8002 (Press CTRL+C to quit)

The next step was to configure LMStudio to act as the MCP client and connect to the server to gain access to the new “send_show_command” tool. While not “standardized,” most MCP clients use a very common JSON configuration to define the servers. LMStudio is one of these clients.
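
For reference, here’s a minimal sketch of what an entry for this server might look like in that JSON configuration; the server name “netai-pyats” is just an illustrative label, and the exact file I used is the mcp-server-config.json in the GitHub project mentioned later in this post:

{
  "mcpServers": {
    "netai-pyats": {
      "url": "http://127.0.0.1:8002/mcp/"
    }
  }
}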

Adding the pyATS MCP server to LMStudio

Wait… if you’re wondering, ‘Where’s the network, Hank? What device are you sending the “show commands” to?’ No worries, my inquisitive friend: I created a very simple Cisco Modeling Labs (CML) topology with a couple of IOL devices configured for direct SSH access using the PATty feature.

NetAI Hello World CML Network

Let’s see it in action!

Okay, I’m sure you’re ready to see it in action. I know I sure was as I was building it. So let’s do it!

To start, I told the LLM how to connect to my network devices in the initial message.

Telling the LLM about my devices

I did this because the pyATS tool needs the address and credential information for the devices. Eventually, I’d like to look at MCP servers for different source-of-truth options like NetBox and Vault so it can “look them up” as needed. But for now, we’ll start simple.

First question: let’s ask about software version information.

Short video of asking the LLM what version of software is running.

You can see the details of the tool call by diving into the input/output screen.

Tool inputs and outputs

This is pretty cool, but what exactly is happening here? Let’s walk through the steps involved. (A simplified sketch of this loop in code follows the list.)

  1. The LLM client starts and queries the configured MCP servers to discover the tools available.
  2. I send a “prompt” to the LLM to consider.
  3. The LLM processes my prompt. It “considers” the different tools available and whether they might be relevant as part of building a response to the prompt.
  4. The LLM determines that the “send_show_command” tool is relevant to the prompt and builds a proper payload to call the tool.
  5. The LLM invokes the tool with the proper arguments from the prompt.
  6. The MCP server processes the tool call request from the LLM and returns the result.
  7. The LLM takes the returned results, along with the original prompt/question, as the new input to use to generate the response.
  8. The LLM generates and returns a response to the query.
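
To make that flow a bit more concrete, here is a simplified, hypothetical sketch of the loop in Python. It is not LMStudio’s actual implementation; helpers like call_llm() and call_mcp_tool() are illustrative placeholders:

# A simplified, hypothetical sketch of the agent loop described above.
def answer_with_tools(prompt, mcp_servers):
    # 1. Discover the tools each configured MCP server offers
    tools = [tool for server in mcp_servers for tool in server.list_tools()]

    # 2-4. Send the prompt plus tool definitions to the model; it may
    #      respond with an answer or with a structured tool call
    response = call_llm(prompt, tools=tools)

    # 5-7. Keep executing tool calls and feeding the results back until
    #      the model produces a final answer
    while response.tool_call is not None:
        result = call_mcp_tool(response.tool_call)  # handled by the MCP server
        response = call_llm(prompt, tools=tools, tool_result=result)

    # 8. Return the model's final, tool-informed answer
    return response.text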

This isn’t all that different from what you might do if you were asked the same question.

  1. You’d consider the question, “What software version is router01 running?”
  2. You’d think about the different ways you could get the information needed to answer the question. Your “tools,” so to speak.
  3. You’d decide on a tool and use it to gather the information you needed. Most likely, SSH to the router and run “show version.”
  4. You’d review the returned output from the command.
  5. You’d then respond to whoever asked you the question with the proper answer.

Hopefully, this helps demystify a little about how these “AI agents” work under the hood.

How about one more example? Perhaps something a bit more complex than simply “show version.” Let’s see if the NetAI agent can help identify which switch port the host is connected to by describing the basic process involved.

Here’s the question (sorry, prompt) that I submit to the LLM:

Prompt asking a multi-step question of the LLM.

What we should notice about this prompt is that it will require the LLM to send and process show commands from two different network devices. Just like with the first example, I do NOT tell the LLM which command to run. I only ask for the information I need. There is no “tool” that knows the IOS commands. That knowledge is part of the LLM’s training data.

Let’s see how it does with this prompt:

The LLM successfully executes the multi-step plan.

And look at that, it was able to handle the multi-step task to answer my question. The LLM even explained what commands it was going to run and how it was going to use the output. And if you scroll back up to the CML network diagram, you’ll see that it correctly identifies interface Ethernet0/2 as the switch port to which the host was connected.

So what’s next, Hank?

Hopefully, you found this exploration of agentic AI tool creation and experimentation as interesting as I have. And maybe you’re starting to see the possibilities for your own daily use. If you’d like to try some of this out on your own, you’ll find everything you need in my netai-learning GitHub project.

  1. The mcp-pyats code for the MCP server. You’ll find both the simple “hello world” example and a more developed work-in-progress tool that I’m adding additional features to. Feel free to use either.
  2. The CML topology I used for this blog post. Though any network that’s SSH reachable will work.
  3. The mcp-server-config.json file that you can reference for configuring LMStudio.
  4. A “System Prompt Library” where I’ve included the system prompts for both a basic “Mr. Packets” network assistant and the agentic AI tool. These aren’t required for experimenting with NetAI use cases, but system prompts can be helpful to ensure you get the results you’re after with an LLM.

A couple of “gotchas” I wanted to share that I encountered during this learning process, which I hope might save you some time:

First, not all LLMs that claim to be “trained for tool use” will work with MCP servers and tools. Or at least not with the ones I’ve been building and testing. Specifically, I struggled with Llama 3.1 and Phi 4. Both seemed to indicate they were “tool users,” but they didn’t call my tools. At first, I thought this was due to my code, but as soon as I switched to Gemma 2, the tools worked immediately. (I also tested with Qwen3 and had good results.)

Second, once you add the MCP server to LMStudio’s “mcp.json” configuration file, LMStudio initiates a connection and maintains an active session. This means that if you stop and restart the MCP server code, the session is broken, giving you an error in LMStudio on your next prompt submission. To fix this issue, you’ll need to either close and restart LMStudio or edit the “mcp.json” file to delete the server, save it, and then re-add it. (There is a bug filed with LMStudio on this problem. Hopefully, they’ll fix it in an upcoming release, but for now, it does make development a bit annoying.)

As for me, I’ll continue exploring the concept of NetAI and how AI agents and tools can make our lives as network engineers more productive. I’ll be back here with my next blog once I have something new and interesting to share.

In the meantime, how are you experimenting with agentic AI? Are you excited about the potential? Any suggestions for an LLM that works well with network engineering knowledge? Let me know in the comments below. Talk to you all soon!

Join Cisco U. | Join the Cisco Learning Network today for free.

Learn with Cisco

X | Threads | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.
