Lessons Learned: Setting up an MCP registry for GitHub Copilot to enable enterprise governance


While GitHub Copilot significantly boosts developer productivity, its ability to integrate with external tools through the Model Context Protocol (MCP) necessitates a robust governance framework to prevent unvetted data access. As per the official documentation, an internal MCP registry acts as a centralized gateway, replacing fragmented local configurations with a single source of truth for approved servers. This allows us to curate a catalog of vetted tools, ensuring consistent security standards across the organization.

We selected the Playwright MCP server as our flagship test case because it addressed a critical gap in our toolchain and was the most requested integration among our engineering teams. From a developer experience standpoint, this setup allows engineers to author, execute, and troubleshoot end-to-end tests directly within the GitHub Copilot chat, effectively transforming the IDE into a live automation environment. By allowing the AI to interact with a real browser instance through the governed registry, we have significantly minimized context-switching and accelerated the overall testing lifecycle.

So, we built the registry using the community version. Essentially, the endpoint and specification requirements come down to three APIs.

A valid registry must support URL routing and follow the v0.1 MCP registry specification, including the following endpoints:

GET /v0.1/servers: Returns a list of all included MCP servers
GET /v0.1/servers/{serverName}/versions/latest: Returns the latest version of a specific server
GET /v0.1/servers/{serverName}/versions/{version}: Returns the details for a specific version of a server
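
To make the contract concrete, below is a minimal registry sketch in TypeScript using Node's built-in http module. It is illustrative only: the single hard-coded Playwright entry and the exact response envelope are assumptions for this example, so refer to the v0.1 specification for the precise payload shapes.

import { createServer } from "node:http";

// One curated entry; a real registry would serve a vetted catalog instead.
const playwright = {
  server: {
    "$schema": "https://static.modelcontextprotocol.io/schemas/2025-10-17/server.schema.json",
    name: "com.microsoft/playwright-mcp",
    description: "Browser automation using Playwright with structured accessibility snapshots.",
    version: "0.0.54",
  },
};

// Keyed by server name; each value holds that server's published versions in order.
const catalog = new Map([[playwright.server.name, [playwright]]]);

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const send = (status: number, body: unknown) => {
    res.writeHead(status, { "Content-Type": "application/json" });
    res.end(JSON.stringify(body));
  };

  // GET /v0.1/servers: list every curated server (latest version of each)
  if (url.pathname === "/v0.1/servers") {
    return send(200, { servers: [...catalog.values()].map((v) => v[v.length - 1]) });
  }

  // GET /v0.1/servers/{serverName}/versions/latest
  // GET /v0.1/servers/{serverName}/versions/{version}
  const match = url.pathname.match(/^\/v0\.1\/servers\/(.+)\/versions\/([^/]+)$/);
  if (match) {
    const versions = catalog.get(decodeURIComponent(match[1])) ?? [];
    const hit = match[2] === "latest"
      ? versions[versions.length - 1]
      : versions.find((v) => v.server.version === match[2]);
    return hit ? send(200, hit) : send(404, { error: "not found" });
  }

  send(404, { error: "unknown route" });
}).listen(8080);

Running this locally (for example with tsx) is enough to exercise the three routes before wiring in the real catalog and pointing the enterprise policy at the deployed URL.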

After successfully deploying our internal registry and modifying the GitHub Enterprise policy settings, we encountered a baffling roadblock: VS Code was picking up the registry configuration perfectly, yet it was hard-blocking every server in the toolchain. It was a classic black-box scenario that forced us to look deeper into the handshake between the IDE's security layer and our governance APIs.

While debugging the issue, I got curious about the error message in the IDE. I remembered that VS Code itself is open source, so why not read the inner workings of the validation? That turned up some new information: while the GitHub documentation only speaks about matching the server name, the actual validation covers the name, description, and version, and the response must also follow the appropriate schema version. AI was a great help in extracting this information quickly.
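
Based on that finding, a small pre-publication check can catch mismatches before the IDE does. The script below is only a sketch of the idea, not VS Code's actual validation code; the registry URL and the expected values are placeholders for our internal setup.

// Fetch what the registry advertises for a server and confirm the fields the
// IDE appears to compare: name, description, version, plus the schema version.
const REGISTRY_URL = "https://mcp-registry.example.internal"; // placeholder
const EXPECTED_SCHEMA = "https://static.modelcontextprotocol.io/schemas/2025-10-17/server.schema.json";

async function checkServer(name: string, expected: { description: string; version: string }) {
  const res = await fetch(`${REGISTRY_URL}/v0.1/servers/${encodeURIComponent(name)}/versions/latest`);
  if (!res.ok) throw new Error(`registry returned ${res.status}`);

  const body = await res.json();
  const server = body.server ?? body; // the envelope shape may differ; see the spec

  const problems = [
    server.name !== name && `name mismatch: ${server.name}`,
    server.description !== expected.description && "description mismatch",
    server.version !== expected.version && `version mismatch: ${server.version}`,
    server.$schema !== EXPECTED_SCHEMA && `unexpected schema: ${server.$schema}`,
  ].filter(Boolean);

  if (problems.length) throw new Error(problems.join("; "));
  console.log(`${name}@${server.version} is consistent with the registry`);
}

checkServer("com.microsoft/playwright-mcp", {
  description: "Browser automation using Playwright with structured accessibility snapshots.",
  version: "0.0.54",
}).catch((err) => {
  console.error(err);
  process.exit(1);
});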



I also had to create an input schema to add the server in the appropriate way. After several attempts, including with Microsoft Copilot and Google Gemini, I arrived at the working version below:
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-10-17/server.schema.json",

  "name": "com.microsoft/playwright-mcp",
  "version": "0.0.54",

  "title": "Playwright MCP",
  "description": "Browser automation using Playwright with structured accessibility snapshots.",

  "websiteUrl": "https://github.com/microsoft/playwright-mcp",
  "repository": {
    "url": "https://github.com/microsoft/playwright-mcp",
    "source": "github"
  },

  "icons": [
    {
      "src": "https://raw.githubusercontent.com/microsoft/playwright-mcp/main/assets/icon.png",
      "type": "image/png",
      "size": 128
    }
  ],

  "packages": [
    {
      "registryType": "npm",
      "identifier": "@playwright/mcp",
      "version": "0.0.54",

      "transport": { "type": "stdio" },

      "packageArguments": [],
      "environmentVariables": []
    }
  ]
}

Most importantly, the MCP server should be installed from the IDE using the "packages" entry so that all the validation parameters line up. With this, all the issues were resolved and we could use the Playwright MCP server in VS Code. The registry also blocked other, unapproved servers, demonstrating that the registry settings were taking effect. I have also raised a pull request to improve the documentation on GitHub Docs.





When AI learns from what you wrote - an example in the context of the AI web scraping debate


The use of content created by others to train AI models has been the subject of significant debate, touching on topics such as intellectual property rights, fair compensation, and the very nature of creativity. Proponents of AI training argue that it is a "transformative" fair use, akin to how a human learns by consuming and being inspired by a vast array of existing works. They contend that AI systems are not simply copying and pasting content, but rather analyzing patterns and relationships within massive datasets to generate new, original outputs.

On the other hand, critics, particularly content creators, artists, and writers, argue that training AI on their work without permission or compensation constitutes a form of theft. They believe that AI companies are profiting from their labor, and that AI-generated content can directly compete with and devalue the original human-made creations, threatening their livelihoods.

Recently, an online battle broke out when Cloudflare accused Perplexity of systematically ignoring website blocks and masking its identity to scrape data from sites, as reported in this article. So, I decided to do a little experiment with some content I created, to see how AI reads and uses it.

I picked my top answer to a Stack Overflow question and asked the same question to Perplexity.



As expected, it gives the answer and provides a reference to my answer on Stack Overflow. If it were Google, the user would have visited the site to find out; here, the answer is available right there, with even better explanations.

The question is whether the assistance AI provides, even for a simple task like writing this blog post, justifies the lack of credit given to the original creators whose content trained the AI.


An experiment with vibe coding on Replit - where it works and where it fails




AI Coding Agents

AI coding agents are advanced tools designed to assist developers in writing, debugging, and optimizing code. These agents leverage artificial intelligence and machine learning models to understand natural language commands, generate code snippets, and even handle full-stack development tasks.

Vibe Coding
Vibe coding is a programming technique where developers leverage AI tools to generate code based on natural language prompts, focusing on the desired outcome rather than the technical implementation. Instead of writing code, you describe what you want your app to do, and AI tools handle the technical aspects, including coding.

Replit

Replit is an online Integrated Development Environment (IDE) that allows users to write, run, and deploy code directly from their browser. It supports over 50 programming languages, including Python, JavaScript, and C++.


Replit is designed for accessibility and ease of use, making it ideal for beginners and experienced developers alike.


Key features of Replit include:


  • Real-time collaboration: Multiple users can work on the same project simultaneously.
  • Built-in hosting and deployment: Users can deploy their applications directly from the platform.
  • AI-powered assistance: The Ghostwriter feature provides code suggestions, autocompletion, and explanations.
  • No setup required: Everything runs in the cloud, eliminating the need for local installations.

Replit is particularly popular for rapid prototyping, educational purposes, and collaborative coding projects.

Attempt to build an application on Replit

My objective was to create a fairly complex application: front end, API integrations, and a backend with a database.

So, this was the initial prompt:

A web-based Engineering Management Dashboard that aggregates team activities from GitHub and JIRA, providing consolidated insights for engineering managers to track their team's progress.

Core Features:

Team management with ability to create teams and add/remove developers

Integration with GitHub API to track code contributions (commits, pull requests, reviews)

Integration with JIRA API to monitor work item status and updates

Chronological activity feed for each developer combining both GitHub and JIRA updates with date filtering

Team dashboard showing consolidated metrics and individual developer summary screens

Visual References:

Inspired by GitHub's activity feed and JIRA's project dashboards, combining clean data presentation with actionable insights.

My Observations:

Excellent ability to create an initial prototype. It knows so much about the domain that it understood the context and did a lot of customization by itself; for example, it created widgets and used native GitHub and JIRA terminology. I liked the communication back to the user. You will be blown away by the output of the first version!

An application build plan was presented and my approval was requested. The code is fairly well organized and easy to understand.



The choice of frameworks, components, and Node modules looks modern and up to date.

Replit created the DB schema as shown below; it's very decent for the purpose.


I was very impressed by the creation of the persistence layer, together with the insertion of seed data.

It would have taken a developer around 20 hours to build this app; Replit did it in 20 minutes!

Adding the first feature

I asked for this feature,

    Ability to add team members in the team view with below attributes - Name, Email, GitId, Jira Id

The agent performed the tasks below and displayed the following message. Again, I was impressed by the communication.

I'm analyzing our current progress on implementing the "Add Team Member" feature.
✓ Created add-team-member-dialog component
✓ Created add-team-member-form component
✓ Created a dedicated add-developer page
✓ Updated the team page with an "Add Developer" button
✓ Added route for the add-developer page


I could see that all the files were updated, but the feature was not working. I looked at the code and realized that the new page was never getting triggered. When I manually navigated to its URL, the page appeared.

Next, I wanted to update the GitHub and JIRA API keys to test whether the integrations worked. I found it very difficult to navigate the flow and test the changes. After multiple attempts at debugging the issues, I realized that I needed to move to a developer-led mode. The agent sounded confused by many of my questions.

At this stage, I could not continue with the Replit agent, so I pulled the code to my local machine to continue in a developer-led way (AI as assistant).


Conclusions

The ease with which non-developers can customize applications generated by AI coding tools varies depending on the complexity of the desired modifications and the specific platform being used. Non-developers should approach AI coding for prototyping with realistic expectations, recognizing that while these tools are powerful, they are not a substitute for software development expertise, especially when dealing with complex applications or aiming for production-ready code.

Agentic AI tools (vibe coding) are useful for non-developers to create quick prototypes. They work best with UI-heavy applications built with HTML, CSS, JavaScript, and Node. However, further customization of an agent-built application will not be smooth sailing. Capabilities are developing rapidly, but AI agents have not reached a stage where one can create a production-grade full-stack application without the involvement of expert developers.

Building Evolutionary Architectures - Book review








In this blog, I want to speak about the book Building Evolutionary Architectures by Neal Ford, Rebecca Parsons, and Patrick Kua. I attended Neal's conference talk on this topic and have heard about fitness functions from many other speakers. That's the reason I wanted to read and understand the concepts presented in the book.

As the title implies, the book talks about building evolutionary architecture. The question the book tries to answer is: how do we make sure our software architecture stays intact as requirements change? How do we build a system that can adapt to future needs, and how do we know that a decision we are taking is not harming the architecture of the system?

The book proposes fitness functions to address this concern. An architectural fitness function provides an objective integrity assessment of some architectural characteristic. A system may have many characteristics we want to measure, so you would write a separate fitness function for each of them. In the book, a fitness function is not defined in a concrete way but rather abstractly, as graphs, tests, or any other method by which we know that our system is still doing well after a change. This means you still need to use your judgment, not only to write the fitness functions but also to make sense of them.
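
To make this concrete, here is one possible fitness function, a small sketch of my own rather than an example from the book. It asserts a single layering rule, assuming a hypothetical TypeScript codebase with src/domain and src/web directories.

import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Recursively collect all files under a directory.
function listFiles(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory() ? listFiles(join(dir, entry.name)) : [join(dir, entry.name)],
  );
}

// Fitness function: the domain layer must not import from the web layer.
// It returns the violations rather than a bare pass/fail, so the trend can
// also be graphed over time.
function domainDoesNotDependOnWeb(srcRoot = "src"): string[] {
  return listFiles(join(srcRoot, "domain"))
    .filter((file) => file.endsWith(".ts"))
    .filter((file) => /from\s+["'][^"']*\/web\//.test(readFileSync(file, "utf8")));
}

const violations = domainDoesNotDependOnWeb();
if (violations.length > 0) {
  console.error("Fitness function failed; domain code depends on the web layer:", violations);
  process.exit(1);
}
console.log("Fitness function passed: the domain layer does not depend on the web layer");

Run in a build pipeline, a check like this guards one characteristic; separate fitness functions would guard others, such as performance budgets or dependency cycles.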

For me, the best thing about the book is that it provides software architects with a rich vocabulary for communicating the intentions behind their choices to a variety of audiences, including business stakeholders, teams, and engineering leadership. The book also gives you a survey of various architectural approaches and discusses practical ideas on how to implement evolutionary architectures. I particularly like the focus on organizational factors and how they apply to software architecture.

In conclusion, I would recommend this book to any software architect. Use it as your communication guide, use it to improve your vocabulary, and use it to get a sense of what is happening across the industry, so that you can choose what is best for your situation.

4 ways for a software developer to contribute to the community


If you are a software professional looking for something new to start, here are four things to try!

1. Attend a community event, user group get-together, or local meetup



2. Answer questions on Stack Overflow or contribute to support forums


3. Share your experience with the community via a blog, Twitter, or other forums


4. Contribute to open source



When to stay with modular monoliths over microservices


We have seen microservices architecture maturing, whereby more and more people are trying to evaluate the benefits before jumping onto an unknown trajectory.

In the talk titled When to stay with modular monoliths over microservices at Oracle Code, Bangalore, I tried to discuss these points. You can view the slides below.



In my view, an oversimplified version of the decision tree comes down to two criteria: business context and relative scaling. I tried to explore these in my presentation. As Martin Fowler puts it, you shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile.




Here is a link to the YouTube recording of the session. Let me know what you think about these topics.

Practical communication strategies for software architects



Here is a video recording of my session titled Practical communication strategies for software architects at the Bangalore software architect meetup.


The session covers communication ideas for various stages of a project and for different stakeholders.

 
Practical communication strategies for software architects from Manu Pk


Have a look at the video recording of the session