New GitHub Copilot Global Bootcamp: Now with Virtual and In-Person Workshops!
From June 17 to July 10, you can learn from anywhere in the world — online or in your own city! The GitHub Copilot Global Bootcamp started in February as a fully virtual learning journey — and it was a hit. More than 60,000 developers joined the first edition across multiple languages and regions. Now, we're excited to launch the second edition — bigger and better — featuring both virtual and in-person workshops, hosted by tech communities around the globe.

This new edition arrives shortly after the announcements at Microsoft Build 2025, where the GitHub and Visual Studio Code teams revealed exciting news:
- The GitHub Copilot Chat extension is going open source, reinforcing transparency and collaboration.
- AI is being deeply integrated into Visual Studio Code, which is evolving into an open source AI editor.
- New APIs and tools are making it easier than ever to build with AI and LLMs.

This bootcamp is your opportunity to explore these new tools, understand how to use GitHub Copilot effectively, and be part of the growing global conversation about AI in software development.
Using DeepSeek-R1 on Azure with JavaScript

The pace at which innovative AI models are being developed is outstanding! DeepSeek-R1 is one such model that focuses on complex reasoning tasks, providing a powerful tool for developers to build intelligent applications. This week, we announced its availability on GitHub Models as well as on Azure AI Foundry. In this article, we'll take a look at how you can deploy and use the DeepSeek-R1 models in your JavaScript applications.

TL;DR key takeaways
- DeepSeek-R1 models focus on complex reasoning tasks and are not designed for general conversation.
- You can quickly switch your configuration to use Azure AI, GitHub Models, or even local models with Ollama.
- You can use the OpenAI Node SDK or LangChain.js to interact with DeepSeek models.

What you'll learn here
- Deploying the DeepSeek-R1 model on Azure.
- Switching between Azure, GitHub Models, or local (Ollama) usage.
- Code patterns to start using DeepSeek-R1 with various libraries in TypeScript.

Reference links
- DeepSeek on Azure - JavaScript demos repository
- Azure AI Foundry
- OpenAI Node SDK
- LangChain.js
- Ollama

Requirements
- GitHub account. If you don't have one, you can create a free GitHub account. You can optionally use GitHub Copilot Free to help you write code and ship your application even faster.
- Azure account. If you're new to Azure, get an Azure account for free to get free Azure credits to get started. If you're a student, you can also get free credits with Azure for Students.

Getting started
We'll use GitHub Codespaces to get started quickly, as it provides a preconfigured Node.js environment for you. Alternatively, you can set up a local environment using the instructions found in the GitHub repository. Click on the button below to open our sample repository in a web-based VS Code, directly in your browser. Once the project is open, wait a bit to ensure everything has loaded correctly. Open a terminal and run the following command to install the dependencies:

npm install

Running the samples
The repository contains several TypeScript files under the samples directory that demonstrate how to interact with DeepSeek-R1 models. You can run a sample using the following command:

npx tsx samples/<sample>.ts

For example, let's start with the first one:

npx tsx samples/01-chat.ts

Wait a bit, and you should see the response from the model in your terminal. You'll notice that it may take longer than usual to respond, and that the response starts with a strange <think> tag. This is because DeepSeek-R1 is designed for tasks that need complex reasoning, like solving problems or answering math questions, and not for your usual chat interactions.

Model configuration
By default, the repository is configured to use GitHub Models, so you can run any example using Codespaces without any additional setup. While it's great for quick experimentation, GitHub Models limits the number of requests you can make in a day and the amount of data you can send in a single request. If you want to use the model more extensively, you can switch to Azure AI or even use a local model with Ollama. You can take a look at samples/config.ts to see how the different configurations are set up; a minimal sketch of this pattern is shown below. We won't cover using Ollama models in this article, but you can find more information in the repository documentation.
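To illustrate the configuration switch described above, here is a minimal sketch of how the two setups can be wired to the OpenAI Node SDK. It is an assumption of what samples/config.ts roughly looks like rather than a copy of it: the Azure environment variable names follow the .env file shown later in this article, while the GitHub Models endpoint variable, token variable, and model name are placeholders you would adapt to your own setup.

```typescript
// Minimal sketch of switching between GitHub Models and Azure AI Foundry
// with the OpenAI Node SDK. The real samples/config.ts may differ.
import OpenAI from "openai";

const AZURE_AI_CONFIG = {
  // Matches the .env values shown later in this article.
  baseURL: process.env.AZURE_AI_BASE_URL,
  apiKey: process.env.AZURE_AI_API_KEY,
  model: "DeepSeek-R1", // assumed deployment/model name
};

const GITHUB_MODELS_CONFIG = {
  // Placeholders: check the GitHub Models docs for the current endpoint,
  // and use a GitHub token with access to GitHub Models.
  baseURL: process.env.GITHUB_MODELS_ENDPOINT,
  apiKey: process.env.GITHUB_TOKEN,
  model: "DeepSeek-R1",
};

// Switch configurations by changing this single line.
const config = GITHUB_MODELS_CONFIG;

const client = new OpenAI({ baseURL: config.baseURL, apiKey: config.apiKey });

const response = await client.chat.completions.create({
  model: config.model,
  messages: [{ role: "user", content: "Why is the sky blue? Reason step by step." }],
});

console.log(response.choices[0]?.message?.content);
```

Because both endpoints expose an OpenAI-compatible API, only the base URL, API key, and model name change between configurations.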
Deploying DeepSeek-R1 on Azure
To experiment with the full capabilities of DeepSeek-R1, you can deploy it on Azure AI Foundry. Azure AI Foundry is a platform that allows you to deploy, manage and develop with AI models quickly. To use Azure AI Foundry, you need to have an Azure account. Let's start by deploying the model on Azure AI Foundry. First, follow this tutorial to deploy a serverless endpoint with the model. When it's time to choose the model, make sure to select the DeepSeek-R1 model in the catalog. Once your endpoint is deployed, you should be able to see your endpoint details and retrieve the URL and API key:

[Screenshot showing the endpoint details in Azure AI Foundry]

Then create a .env file in the root of the project and add the following content:

AZURE_AI_BASE_URL="https://<your-deployment-name>.<region>.models.ai.azure.com/v1"
AZURE_AI_API_KEY="<your-api-key>"

Tip: if you're copying the endpoint from the Azure AI Foundry portal, make sure to add the /v1 at the end of the URL. Open the samples/config.ts file and update the default export to use Azure:

export default AZURE_AI_CONFIG;

Now all samples will use the Azure configuration.

Explore reasoning with DeepSeek-R1
Now that you have the model deployed, you can start experimenting with it. Open the samples/08-reasoning.ts file to see how the model handles more complex tasks, like helping us understand a well-known weird piece of code.

const prompt = `
float fast_inv_sqrt(float number) {
  long i;
  float x2, y;
  const float threehalfs = 1.5F;

  x2 = number * 0.5F;
  y  = number;
  i  = *(long*)&y;
  i  = 0x5f3759df - ( i >> 1 );
  y  = *(float*)&i;
  y  = y * ( threehalfs - ( x2 * y * y ) );

  return y;
}

What is this code doing? Explain me the magic behind it.
`;

Now run this sample with the command:

npx tsx samples/08-reasoning.ts

You should see the model's response streaming piece by piece in the terminal, describing its thought process before providing the actual answer to our question.

[Screenshot showing the model's response streaming in the terminal]

Brace yourself, as it might take a while to get the full response! At the end of the process, you should see the model's detailed explanation of the code, along with some context around it.
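Since the reasoning trace and the final answer arrive in the same stream, it can be useful to separate the <think> block from the rest of the output. The snippet below is a small, hypothetical sketch using the OpenAI Node SDK's streaming API; the repository's own samples may handle this differently, and the environment variables and model name follow the Azure configuration above.

```typescript
// Hypothetical sketch: stream a DeepSeek-R1 answer and split the <think>
// reasoning trace from the final answer. Not taken from the sample repository.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.AZURE_AI_BASE_URL,
  apiKey: process.env.AZURE_AI_API_KEY,
});

const stream = await client.chat.completions.create({
  model: "DeepSeek-R1",
  stream: true,
  messages: [{ role: "user", content: "What is this code doing? Explain the magic behind it." }],
});

let fullText = "";
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  fullText += delta;
  process.stdout.write(delta); // print tokens as they arrive
}

// DeepSeek-R1 emits its chain of thought inside a <think>...</think> block
// before the actual answer, so we can split on it afterwards.
const match = fullText.match(/<think>([\s\S]*?)<\/think>([\s\S]*)/);
const reasoning = match ? match[1].trim() : "";
const answer = match ? match[2].trim() : fullText.trim();

console.log(`\n\nReasoning trace length: ${reasoning.length} characters`);
console.log(`Final answer:\n${answer}`);
```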
Leveraging frameworks
Most examples in this repository are built with the OpenAI Node SDK, but you can also use LangChain.js to interact with the model. This might be especially interesting if you need to integrate other sources of data or want to build a more complex application. Open the file samples/07-langchain.ts to have a look at the setup, and see how you can reuse the same configuration we used with the OpenAI SDK.

Going further
Now it's your turn to experiment and discover the full potential of DeepSeek-R1! You can try more advanced prompts, integrate it into your larger application, or even build agents to make the most out of the model. To continue your learning journey, you can check out the following resources:
- Generative AI with JavaScript (GitHub): code samples and resources to learn Generative AI with JavaScript.
- Build a serverless AI chat with RAG using LangChain.js (GitHub): a next-step code example to build an AI chatbot using Retrieval-Augmented Generation and LangChain.js.

Introducing Azure AI Travel Agents: A Flagship MCP-Powered Sample for AI Travel Solutions

We are excited to introduce AI Travel Agents, a sample application with enterprise functionality that demonstrates how developers can coordinate multiple AI agents (written in multiple languages) to explore travel planning scenarios. It's built with LlamaIndex.TS for agent orchestration, Model Context Protocol (MCP) for structured tool interactions, and Azure Container Apps for scalable deployment.

TL;DR: Experience the power of MCP and Azure Container Apps with The AI Travel Agents! Try out the live demo locally on your computer for free to see real-time agent collaboration in action. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, adding support for A2A and more.

NOTE: This example uses mock data and is intended for demonstration purposes rather than production use.

The Challenge: Scaling Personalized Travel Planning
Travel agencies grapple with complex tasks: analyzing diverse customer needs, recommending destinations, and crafting itineraries, all while integrating real-time data like trending spots or logistics. Traditional systems falter with latency, scalability, and coordination, leading to delays and frustrated clients. The AI Travel Agents tackles these issues with a technical trifecta:
- LlamaIndex.TS orchestrates six AI agents for efficient task handling.
- MCP equips agents with travel-specific data and tools.
- Azure Container Apps ensures scalable, serverless deployment.
This architecture delivers operational efficiency and personalized service at scale, transforming chaos into opportunity.

LlamaIndex.TS: Orchestrating AI Agents
The heart of The AI Travel Agents is LlamaIndex.TS, a powerful agentic framework that orchestrates multiple AI agents to handle travel planning tasks. Built on a Node.js backend, LlamaIndex.TS manages agent interactions in a seamless and intelligent manner:
- Task Delegation: The Triage Agent analyzes queries and routes them to specialized agents, like the Itinerary Planning Agent, ensuring efficient workflows.
- Agent Coordination: LlamaIndex.TS maintains context across interactions, enabling coherent responses for complex queries, such as multi-city trip plans.
- LLM Integration: Connects to Azure OpenAI, GitHub Models or any local LLM using Foundry Local for advanced AI capabilities.
LlamaIndex.TS's modular design supports extensibility, allowing new agents to be added with ease. LlamaIndex.TS is the conductor, ensuring agents work in sync to deliver accurate, timely results. Its lightweight orchestration minimizes latency, making it ideal for real-time applications.

MCP: Fueling Agents with Data and Tools
The Model Context Protocol (MCP) empowers AI agents by providing travel-specific data and tools, enhancing their functionality. MCP acts as a data and tool hub:
- Real-Time Data: Supplies up-to-date travel information, such as trending destinations or seasonal events, via the Web Search Agent using Bing Search.
- Tool Access: Connects agents to external tools, like the .NET-based customer queries analyzer for sentiment analysis, the Python-based itinerary planning for trip schedules, or destination recommendation tools written in Java.
For example, when the Destination Recommendation Agent needs current travel trends, MCP delivers them via the Web Search Agent. This modularity allows new tools to be integrated seamlessly, future-proofing the platform. MCP's role is to enrich agent capabilities, leaving orchestration to LlamaIndex.TS; a minimal sketch of an MCP tool server follows below.
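To make the MCP side more concrete, here is a small, hypothetical sketch of what a tool exposed by one of these MCP servers could look like, written with the TypeScript MCP SDK (@modelcontextprotocol/sdk). The real servers in this sample are .NET, Python, and Java services and use mock data, so the tool name, parameters, and response below are illustrative assumptions rather than the actual implementation.

```typescript
// Hypothetical MCP server exposing a single destination-recommendation tool.
// Illustrative only; the actual AI Travel Agents servers differ.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "destination-recommendation", version: "0.1.0" });

// Register a tool that the orchestrator (here, LlamaIndex.TS) can call via MCP.
server.tool(
  "recommend_destination",
  {
    interests: z.string().describe("What the traveler enjoys, e.g. 'beaches, food'"),
    month: z.string().describe("Month of travel, e.g. 'August'"),
  },
  async ({ interests, month }) => ({
    // Mock response, mirroring the sample's use of mock data.
    content: [
      {
        type: "text",
        text: `Suggested destinations for ${interests} in ${month}: Lisbon, Bali, Kyoto.`,
      },
    ],
  })
);

// Expose the server over stdio so an MCP client can connect to it.
await server.connect(new StdioServerTransport());
```

Because each capability is just another MCP tool, swapping the mock logic for a real travel API would only change the handler body, not the orchestration layer.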
Azure Container Apps: Scalability and Resilience
Azure Container Apps powers The AI Travel Agents sample application with a serverless, scalable platform for deploying microservices. It ensures the application handles varying workloads with ease:
- Dynamic Scaling: Automatically adjusts container instances based on demand, managing booking surges without downtime.
- Polyglot Microservices: Supports .NET (Customer Query), Python (Itinerary Planning), Java (Destination Recommendation) and Node.js services in isolated containers.
- Observability: Integrates tracing, metrics, and logging, enabling real-time monitoring.
- Serverless Efficiency: Abstracts infrastructure, reducing costs and accelerating deployment.
Azure Container Apps' global infrastructure delivers low-latency performance, critical for travel agencies serving clients worldwide.

The AI Agents: A Quick Look
While MCP and Azure Container Apps are the stars, they support a team of multiple AI agents that drive the application's functionality. Built and orchestrated with LlamaIndex.TS via MCP, these agents collaborate to handle travel planning tasks:
- Triage Agent: Directs queries to the right agent, leveraging MCP for task delegation.
- Customer Query Agent: Analyzes customer needs (emotions, intents), using .NET tools.
- Destination Recommendation Agent: Suggests tailored destinations, using Java.
- Itinerary Planning Agent: Crafts efficient itineraries, powered by Python.
- Web Search Agent: Fetches real-time data via Bing Search.
These agents rely on MCP's real-time communication and Azure Container Apps' scalability to deliver responsive, accurate results. It's worth noting, though, that this sample application uses mock data for demonstration purposes. In a real-world scenario, the application would communicate with an MCP server plugged into a real production travel API.

Key Features and Benefits
The AI Travel Agents offers features that showcase the power of MCP and Azure Container Apps:
- Real-Time Chat: A responsive Angular UI streams agent responses via MCP's SSE, ensuring fluid interactions.
- Modular Tools: MCP enables tools like analyze_customer_query to integrate seamlessly, supporting future additions.
- Scalable Performance: Azure Container Apps ensures the UI, backend and the MCP servers handle high traffic effortlessly.
- Transparent Debugging: An accordion UI displays agent reasoning, providing backend insights.
Benefits:
- Efficiency: LlamaIndex.TS streamlines operations.
- Personalization: MCP's data drives tailored recommendations.
- Scalability: Azure ensures reliability at scale.

Thank You to Our Contributors!
The AI Travel Agents wouldn't exist without the incredible work of our contributors. Their expertise in MCP development, Azure deployment, and AI orchestration brought this project to life. A special shoutout to:
- Pamela Fox – Leading the development of the Python MCP server.
- Aaron Powell and Justin Yoo – Leading the development of the .NET MCP server.
- Rory Preddy – Leading the development of the Java MCP server.
- Lee Stott and Kinfey Lo – Leading the development of the local AI integration with Foundry Local.
- Anthony Chu and Vyom Nagrani – Leading the Azure Container Apps roadmap.
- Matt Soucoup and Julien Dubois – Leading the ACA DevRel strategy.
- Wassim Chegham – Architected MCP and backend orchestration.
And many more! See the GitHub repository for all contributors. Thank you for your dedication to pushing the boundaries of AI and cloud technology!

Try It Out
Experience the power of MCP and Azure Container Apps with The AI Travel Agents!
Try out the live demo locally on your computer for free to see real-time agent collaboration in action.

Conclusion
Developers can explore the open-source project on GitHub today, with setup and deployment instructions. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, adding support for A2A and more. This is still a work in progress and we welcome all kinds of contributions. Please fork and star the repo to stay tuned for updates! We would love your feedback; continue the discussion in the Azure AI Discord: https://v012qbntdn.proxynodejs.usequeue.com/AI/discord

On behalf of the Microsoft DevRel Team.
Visual Studio Code: An Open Source AI Editor

The Visual Studio Code team is announcing an important step toward the future of code editors: integrating artificial intelligence into a fully open environment. Committed to maintaining transparency, collaboration, and a focus on the community, VS Code will open source its GitHub Copilot Chat extension and bring its AI capabilities into the core of the editor, reaffirming its commitment to free software and community-driven development.
Visual Studio Code is now an open source editor with integrated AI!

We believe the future of code editors should be open and powered by artificial intelligence. Over the past decade, VS Code has become one of the most successful open source (OSS) projects on GitHub. We are grateful to our vibrant community of contributors and users, who choose VS Code precisely because it is open source. As AI becomes an essential part of the development experience in VS Code, we want to stay true to our founding principles: openness, collaboration, and community-driven development.

We will open source the code of the GitHub Copilot Chat extension under the MIT license and then carefully refactor the relevant components of that extension into the core of VS Code. This is the next, and most logical, step in making VS Code a truly open source AI editor. It reflects the fact that AI-based tools are now a central part of how we write code, and it reaffirms our belief that working in the open leads to a better product for users and fosters a diverse ecosystem of extensions.

Why open source now?
Over the past few months, we have seen important shifts in AI development that motivated us to move AI development in VS Code from a closed model to an open source one:
- Large language models have improved significantly, reducing the need for "secret" prompting strategies.
- The most popular and effective UX approaches for AI interactions have become common across editors. We want to let the community refine and build on these UI elements by making them available in a stable, open codebase.
- An ecosystem of open source AI tools and extensions for VS Code has emerged. We want to make it easier for the authors of these extensions to build, debug, and test their solutions, which is quite challenging today without access to the Copilot Chat extension's source code.
- We receive many questions about the data collected by AI editors. Open sourcing the Copilot Chat extension will allow anyone to see exactly what data is collected, increasing transparency.
- Malicious actors are increasingly targeting AI tools for developers. Throughout VS Code's history as an OSS project, community contributions and issues have helped us identify and fix security problems quickly.

Next steps
Over the coming weeks, we will work on open sourcing the code of the GitHub Copilot Chat extension and refactoring its AI features into the core of VS Code. Our main priorities remain the same: delivering great performance, powerful extensibility, and an intuitive, beautiful user interface. Open source works best when communities build on a stable, shared foundation, so our goal is to make contributing to the AI features as simple as contributing to any other part of VS Code. The stochastic nature of large language models makes testing AI features and prompt changes especially challenging. To make this process easier, we will also open source our prompt test infrastructure, ensuring that community pull requests can be validated with proper tests. As always, you can follow our iteration plan, where we will share more details about this work.
We will also keep our frequently asked questions (FAQ) section up to date with questions submitted by the community. We greatly value your feedback as we bring this vision to life, so please share it! We are excited to shape the future of development as an open source AI editor, and we hope you will join us on this journey of building in the open. Happy coding! The VS Code team
VS Code: An Open Source AI Code Editor

We believe the future of code editors should be open and should embrace AI. Over the past decade, VS Code has established itself as one of the most successful open source projects on GitHub. We are grateful to the vibrant community of contributors and users who choose VS Code, and one of the biggest reasons you use it is likely that it is open source. Now that AI is becoming central to the developer experience in VS Code, we want to keep honoring the development philosophy behind VS Code: openness, collaboration, and community-driven development. To that end, we will open source the GitHub Copilot Chat extension under the MIT license and then refactor the relevant functionality of that extension into the core of VS Code. This is the next step, and a natural evolution, in making VS Code an open source AI editor. At a time when AI-powered tools have become central to how code is written, this reaffirms our belief that developing in the open leads to a better product and fosters a more diverse extension ecosystem.
Mastering Query Fields in Azure AI Document Intelligence with C#

Introduction
Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.

Use case scenario
In the course of Azure AI Document Intelligence engineering tasks or code reviews, many developers have encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`. The error might be similar to: Inner Error: The parameter urlSource or base64Source is required. This challenge comes down to parameter errors and SDK changes. The problematic code typically looks like this in C#:

BinaryData data = BinaryData.FromBytes(Content);
var queryFields = new List<string> { "FullName", "CompanyName", "JobTitle" };
var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    data,
    "1-2",
    queryFields: queryFields,
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);

One of the reasons this fails is that the developer is using `Azure.AI.DocumentIntelligence` v1.0.0, where `base64Source` and `urlSource` must be handled internally. The older examples using `AnalyzeDocumentContent` no longer apply, leading to errors. There are two practical solutions:
- Using AnalyzeDocumentOptions.
- An alternative method using a manual JSON payload.

Using AnalyzeDocumentOptions
The correct method involves using AnalyzeDocumentOptions, which streamlines the request construction using the steps below.

Prepare the document content:

BinaryData data = BinaryData.FromBytes(Content);

Create AnalyzeDocumentOptions:

var analyzeOptions = new AnalyzeDocumentOptions(modelId, data)
{
    Pages = "1-2",
    Features = { DocumentAnalysisFeature.QueryFields },
    QueryFields = { "FullName", "CompanyName", "JobTitle" }
};

- `modelId`: Your trained model's ID.
- `Pages`: Specify pages to analyze (e.g., "1-2").
- `Features`: Enable `QueryFields`.
- `QueryFields`: Define which fields to extract.

Run the analysis:

Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    analyzeOptions
);
AnalyzeResult result = operation.Value;

The reason this works:
- The SDK manages `base64Source` automatically.
- This approach matches the latest SDK standards.
- It results in cleaner, more maintainable code.

Alternative method using a manual JSON payload
For advanced use cases where more control over the request is needed, you can manually create the JSON payload. For example:

var queriesPayload = new
{
    queryFields = new[]
    {
        new { key = "FullName" },
        new { key = "CompanyName" },
        new { key = "JobTitle" }
    }
};
string jsonPayload = JsonSerializer.Serialize(queriesPayload);
BinaryData requestData = BinaryData.FromString(jsonPayload);
var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    requestData,
    "1-2",
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);

When to use the above:
- Custom request formats
- Non-standard data source integration

Key points to remember
- Breaking changes exist between the preview versions and v1.0.0, so check your SDK version.
- Prefer `AnalyzeDocumentOptions` for simpler, error-free integration using the built-in classes.
- Ensure your content is wrapped in `BinaryData`, or use a direct URL, for correct document input.

Conclusion
In this article, we have seen how using AnalyzeDocumentOptions significantly improves how you integrate query fields with Azure AI Document Intelligence in C#. It ensures your solution is up to date, readable, and more reliable. Staying aware of SDK updates and evolving best practices will help you unlock deeper insights from your documents effortlessly.

References
- Official AnalyzeDocumentAsync documentation.
- Official Azure SDK documentation.
- Azure Document Intelligence C# SDK support for add-on query fields.
Is it a bug or a feature? Using Prompty to automatically track and tag issues.

Introduction
You've probably noticed a theme in my recent posts: tackling challenges with AI-powered solutions. In my latest project, I needed a fast way to classify and categorize GitHub issues using a predefined set of tags. The tag data was there, but the connections between issues and tags weren't. To bridge that gap, I combined Azure OpenAI Service, Prompty, and a GitHub integration to automatically extract and assign the right labels. By automating issue tagging, I was able to:
- Streamline contributor workflows with consistent, on-time labels that simplify triage
- Improve repository hygiene by keeping issues well-organized, searchable, and easy to navigate
- Eliminate repetitive maintenance so the team can focus on community growth and developer empowerment
- Scale effortlessly as the project expands, turning manual chores into intelligent automation

Challenge: 46 issues, no tags
The Prompty repository currently hosts 46 relevant, but untagged, issues. To automate labeling, I first defined a complete tag taxonomy. Then I built a solution using:
- Prompty for prompt templating and function calling
- Azure OpenAI (gpt-4o-mini) to classify each issue
- Azure AI Search for retrieval-augmented context (RAG)
- Python to orchestrate the workflow and integrate with GitHub
By the end, you'll have an autonomous agent that fetches open issues, matches them against your custom taxonomy, and applies labels back on GitHub.

Prerequisites:
- An Azure account with Azure AI Search and Azure OpenAI enabled
- Python and Prompty installed
- Clone the repo and install dependencies: pip install -r requirements.txt

Step 1: Define the prompt template
We'll use Prompty to structure our LLM instructions. If you haven't yet, install the Prompty VS Code extension and refer to the Prompty docs to get started. Prompty combines:
- Tooling to configure and deploy models
- Runtime for executing prompts and function calls
- Specification (YAML) for defining prompts, inputs, and outputs

Our Prompty file is set to use gpt-4o-mini, and below is our sample input:

sample:
  title: Including Image in System Message
  tags: ${file:tags.json}
  description: An error arises in the flow, coming up starting from the "complete" block. It seems like it is caused by placing a static image in the system prompt, since removing it causes the issue to go away. Please let me know if I can provide additional context.

The inputs are the tags file, retrieved using RAG, plus the issue title and description, which we fetch from GitHub once a new issue is posted. Next, in our Prompty file, we gave instructions for how the LLM should behave, as follows:

system:
You are an intelligent GitHub issue tagging assistant.
Available tags: ${inputs}
{% if tags.tags %}
## Available Tags
{% for tag in tags.tags %}
name: {{tag.name}}
description: {{tag.description}}
{% endfor %}
{% endif %}
Guidelines:
1. Only select tags that exactly match the provided list above
2. If no tags apply, return an empty array []
3. Return ONLY a valid JSON array of strings, nothing else
4. Do not explain your choices or add any other text
Use your understanding of the issue and refer to documentation at https://1aps7vjhou.proxynodejs.usequeue.com/ to match appropriate tags.
Tags may refer to:
- Issue type (e.g., bug, enhancement, documentation)
- Tool or component (e.g., tool:cli, tracer:json-tracer)
- Technology or integration (e.g., integration:azure, runtime:python)
- Conceptual elements (e.g., asset:template-loading)
Return only a valid JSON array of the issue title, description and tags.
If the issue does not fit in any of the categories, return an empty array with:
["No tags apply to this issue. Please review the issue and try again."]
Example:
Issue Title: "App crashes when running in Azure CLI"
Issue Body: "Running the generated code in Azure CLI throws a Python runtime error."
Tag List: ["bug", "tool:cli", "runtime:python", "integration:azure"]
Output: ["bug", "tool:cli", "runtime:python", "integration:azure"]

user:
Issue Title: {{title}}
Issue Description: {{description}}

Once the Prompty file was ready, I right-clicked on the file and converted it to Prompty code, which generated base Python code to start from instead of building from scratch.

Step 2: Enrich with context using Azure AI Search
To be able to generate labels for our issues, I created a sample set of around 20 tags, each with a title and a description of what it does. As a starting point, I used Azure AI Foundry, where I uploaded the data and created an index; this typically takes about an hour to complete. Next, I implemented a retrieval function:

def query_azure_search(query_text):
    """Query Azure AI Search for relevant documents and tags."""
    search_client = SearchClient(
        endpoint=SEARCH_SERVICE_ENDPOINT,
        index_name=SEARCH_INDEX_NAME,
        credential=AzureKeyCredential(SEARCH_API_KEY)
    )

    # Perform the search
    results = search_client.search(
        search_text=query_text,
        query_type=QueryType.SIMPLE,
        top=5  # Retrieve top 5 results
    )

    # Extract content and tags from results
    documents = [doc["content"] for doc in results]
    tags = [doc.get("tags", []) for doc in results]  # Assuming "tags" is a field in the index

    # Flatten and deduplicate tags
    unique_tags = list(set(tag for tag_list in tags for tag in tag_list))

    return documents, unique_tags

Step 3: Orchestrate the Workflow
In addition to adding RAG, I added functions in the basic.py file to:
- fetch_github_issues: calls the GitHub REST API to list open issues and filters out any that already have labels.
- run_with_rag: for the selected issues, calls query_azure_search to append any retrieved docs, tags the issues, and parses the JSON output from the prompt into a list of labels.
- label_issue: patches the issue to apply a list of labels.
- process_issues: fetches all unlabelled issues, runs the RAG pipeline to generate the tags, and calls label_issue to apply them.
- scheduler loop: runs periodically to check whether there is a new issue and apply a label.

Step 4: Validate and Run
Ensure all .env variables are set (API keys, endpoints, token). Install dependencies and execute using:

python basic.py

Create a new GitHub issue and watch as your agent assigns tags in real time. A short demo video illustrates the workflow.

Next Steps
- Migrate from PATs to a GitHub App for tighter security
- Create a multi-agent application and add an evaluator agent to review tags before publishing
- Integrate with GitHub Actions or Azure Pipelines for CI/CD

Conclusion and Resources
By combining Prompty, Azure AI Search, and Azure OpenAI, you can fully automate GitHub issue triage—improving consistency, saving time, and scaling effortlessly. Adapt this pattern to any classification task in your own workflows! You can learn more using the following resources:
- Prompty documentation to learn more on Prompty
- Agents for Beginners course to learn how you can build your own agent
VS Code Live: Agent Mode Day Highlights

🎙️ Featuring: Olivia McVicker, Cassidy Williams, Burke Holland, Harald Kirschner, Toby Padilla, Rob Lourens, Tim Rogers, James Montemagno, Don Jayamanne, Brigit Murtaugh, Chris Harrison.

What is Agent Mode?
Agent Mode in VS Code represents a leap beyond traditional AI code completion. Instead of simply suggesting code snippets, Agent Mode empowers the AI to:
- Write, edit, and iterate on code
- Run terminal commands autonomously
- Fix its own mistakes during the workflow
- Interact with external tools, APIs, and services
This creates a more dynamic, "agentic" coding partner that can automate complex tasks, reduce manual intervention, and keep developers in their flow. Agent Mode is accessible directly in VS Code and integrates seamlessly with GitHub Copilot, making advanced AI capabilities available to developers of all levels.

1. Model Context Protocol (MCP)
MCP is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). This protocol acts as a bridge, allowing AI agents in VS Code to securely connect with a vast ecosystem of tools, APIs, and internal resources—well beyond what's available out of the box. Key points about MCP:
- Ecosystem Approach: MCP enables developers to connect Copilot and other agents to everything they need—internal documentation, databases, design tools like Figma, and more—by exposing these resources as MCP servers.
- Open and Extensible: Similar to the Language Server Protocol (LSP), MCP allows anyone to build and share new integrations. There are already thousands of MCP servers available, and building your own is straightforward with SDKs in major languages.
- Tool Chaining: MCP servers can be composed together, allowing the AI to automate workflows involving multiple tools—like fetching data, updating issues, or running tests—without manual intervention.
- Secure and Evolving: The protocol is evolving to support more secure authentication (moving from local API keys to OAuth-based flows) and easier discovery and installation, making it safer and more user-friendly.
A minimal sketch of how a client discovers and calls an MCP server's tools is shown below.
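The sketch below is a hypothetical, illustrative example of the client side of MCP using the TypeScript SDK (@modelcontextprotocol/sdk): it launches a server over stdio, lists its tools, and calls one of them. VS Code and Copilot handle this for you in Agent Mode; the server command and tool name here are placeholders, not real packages.

```typescript
// Hypothetical sketch of an MCP client: discover a server's tools and call one.
// The server command and tool name are placeholders for illustration only.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch an MCP server as a child process and connect over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "example-mcp-server"], // placeholder package name
});

const client = new Client({ name: "example-mcp-client", version: "0.1.0" });
await client.connect(transport);

// Discover which tools the server exposes (this is what the agent sees).
const { tools } = await client.listTools();
console.log("Available tools:", tools.map((tool) => tool.name));

// Call one of the tools with structured arguments.
const result = await client.callTool({
  name: "recommend_destination", // placeholder tool name
  arguments: { interests: "hiking", month: "October" },
});

console.log("Tool result:", JSON.stringify(result.content, null, 2));

await client.close();
```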
2. Next Edit Suggestions (NES)
Next Edit Suggestions (NES) is a feature designed to enhance the editing experience by providing context-aware recommendations for code changes. NES helps developers by:
- Suggesting relevant edits based on the current code context
- Supporting tasks like refactoring, bug fixing, and feature additions
- Allowing developers to focus on logic and architecture rather than syntax
NES works hand in hand with Agent Mode, letting developers quickly accept, undo, or iterate on AI-generated changes, all while maintaining control and oversight of their codebase.

3. Bring Your Own Key (BYOK): Custom AI Model Integration
Bring Your Own Key (BYOK) allows users to connect their own API keys for custom AI models, including those from OpenAI, Anthropic, or even local models via platforms like Ollama. With BYOK, developers can:
- Choose from a variety of AI models, including the latest GPT-4o Mini and Claude 3 Opus
- Integrate local or cloud-based models for privacy, cost, or performance reasons
- Tailor the AI experience to their specific needs and preferences
This flexibility ensures that developers are not locked into a single provider and can experiment with the latest advancements in AI as soon as they become available.

4. The Agentic Future
A preview of "Project Padawan" demonstrated how agentic workflows could soon run asynchronously in the cloud. Developers will be able to assign tasks to Copilot (such as resolving GitHub issues), which will autonomously create pull requests, run tests, and iterate on feedback—all while the developer focuses on other work.

GitHub Copilot Skills Challenge
Want to learn and experiment with GitHub Copilot's Agent Mode while earning a Digital Badge? Join the GitHub Copilot Skills Challenge—a hands-on learning experience on Microsoft Learn where you'll build a Python app using Agent Mode through a guided tutorial. Complete the challenge by April 30, 2025, to earn your badge!
- Check out the official rules: https://v012qbntdn.proxynodejs.usequeue.com/csc/terms
- Register for the Challenge: https://v012qbntdn.proxynodejs.usequeue.com/csc/githubcopilot
- Request your badge at https://v012qbntdn.proxynodejs.usequeue.com/getyourbadge

Getting Started
Enable Agent Mode—available in both stable and Insiders builds—to start experimenting with advanced AI capabilities, and explore MCP integrations by browsing the growing list of servers or building your own with the available SDKs. Want to see these features in action? Watch the full stream for live demos and expert tips, and save the date for the next episode of our VS Code Live series: https://v012qbntdn.proxynodejs.usequeue.com/VSCode/Live

Happy Coding!