In this workshop, you'll get an explanation of what AI agents are and how they work, along with a survey of current AI agent frameworks, including LangChain, CrewAI, and Microsoft's AutoGen, and how to implement agents in each. We'll also discuss writing custom tools for agents, multi-agent implementations, and using datasets with agents for Agentic RAG.
MCP, or Model Context Protocol, is a standardized framework that allows AI agents to seamlessly connect with external data sources, APIs, and tools. Its main purpose is to make AI agents more intelligent and context-aware by giving them real-time access to live information and actionable capabilities beyond their built-in knowledge.
In this workshop, we'll learn what MCP is, how it works, and how it can be used to create AI agents that can work with any process that implements MCP.
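Under the hood, MCP exchanges JSON-RPC 2.0 messages between the agent (the client) and a tool server. As a rough sketch of the shape of a tool invocation (the tool name and arguments below are invented for illustration, not part of any real server):

```python
import json

# Illustrative shape of an MCP "tools/call" request. MCP messages are
# JSON-RPC 2.0; the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",             # hypothetical tool name
        "arguments": {"city": "Raleigh"},  # hypothetical arguments
    },
}

wire = json.dumps(request)   # what actually travels to the server
decoded = json.loads(wire)
print(decoded["method"])     # -> tools/call
```

The server replies with a JSON-RPC result containing the tool's output, which the agent then folds back into its reasoning.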
In this intensive 3-hour hands-on workshop, you'll learn to master the art and science of prompt engineering. Learn systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real-time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
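One kind of reusable pattern the workshop builds toward can be sketched as a simple role/task/constraints/format template (the slot values below are placeholders, not taken from the workshop labs):

```python
# A minimal reusable prompt template: role + task + constraints +
# output format. The filled-in values are illustrative only.
template = (
    "You are {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Answer format: {fmt}"
)

prompt = template.format(
    role="a senior Python code reviewer",
    task="review the function below for bugs",
    constraints="be concise and cite specific lines",
    fmt="a bulleted list",
)
print(prompt)
```

Templates like this make prompts testable and repeatable: you vary one slot at a time and compare output quality.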
Large language models (LLMs) are great and provide a lot of utility out of the box - as long as you accept that the data they have is only as recent as their training, and as long as you understand that they don't have context for any data they weren't trained on - which may be the local context you really care about. In this session, we'll see how to augment their training and knowledge with content they weren't trained on, such as your own files. And we'll see how to seamlessly combine that content with the LLM's existing knowledge and query/prompt interfaces.
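The retrieve-then-augment loop at the heart of this approach (Retrieval-Augmented Generation) can be sketched in a few lines. Real systems use learned embeddings and a vector store; this toy version scores documents by keyword overlap purely to show the flow:

```python
# Toy RAG sketch: retrieve the most relevant document, then splice it
# into the prompt. Word overlap stands in for embedding similarity.
def words(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query: str, doc: str) -> int:
    return len(words(query) & words(doc))

docs = [
    "Our refund policy allows returns within 30 days.",
    "Office hours are 9 to 5 on weekdays.",
]

query = "What is the refund policy?"
best = max(docs, key=lambda d: score(query, d))

# The retrieved text becomes extra context in the LLM prompt.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```

Swapping the overlap score for real embeddings and the list for a vector database gives you the production version of the same idea.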
Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC. Join us to look at the multiple ways we can leverage AI to simplify and streamline software development from start to finish. We'll include hands-on exercises to demonstrate key points.
Hugging Face (huggingface.co) is the premier site for all things related to large language models (LLMs). It offers free, open source models and datasets that are easy to access, download, and start working with. It also provides the well-known Transformers library, which you can use with Python to code up quick and simple apps that interface with LLMs for chatbots, sentiment analysis, classification, and more.
Join us for an exploration of the site and learn how to navigate it and use its many features. We'll also see how to use its models and the Transformers library with simple Python coding to do typical LLM interactions.
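As a taste, the Transformers `pipeline` API reduces a typical task like sentiment analysis to a couple of lines. This sketch assumes the `transformers` package is installed and that a default model can be downloaded on first use; if not, it falls back gracefully:

```python
# Sentiment analysis with the Transformers pipeline API. If the
# package or its default model isn't available, fall back gracefully.
try:
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    result = classifier("This workshop made LLMs finally click for me!")
    label = result[0]["label"]
except Exception:  # ImportError, or a failed model download offline
    label = "(transformers/model unavailable)"

print(label)
```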
Generative AI is everywhere these days. But there are so many parts of it and so much to understand that it can be overwhelming and confusing for anyone not already immersed in it. In this full-day workshop, open-source author, trainer, and technologist Brent Laster will explain the concepts and workings of Generative AI from the ground up. You'll learn about core concepts like neural networks all the way through to working with Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and AI Agents. Along the way we'll explain integrated concepts like embeddings, vector databases, and the current ecosystem around LLMs, including sites like Hugging Face and frameworks like LangChain. And, for the key concepts, you'll be doing hands-on labs using Python and a pre-configured environment to internalize the learning.
LM Studio is an easy-to-use desktop app for experimenting with local and open-source Large Language Models (LLMs) by hosting them on your own system.
Hugging Face is a community hub focused on creating and sharing AI models. It provides many free and pre-trained models as well as datasets and tools to use with them.
Ollama is a command line tool for downloading, exploring, and using LLMs on your local system.
In this hands-on workshop, we'll cover the basics of getting up and running with LM Studio and Ollama, and give you hands-on labs where you can use them along with Hugging Face to find, load, and run LLMs, interact with them via chat and Python code, and more!
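To give a flavor of the labs: Ollama also exposes a local REST API (by default on port 11434). This standard-library-only sketch assumes a model like `llama3.2:3b` has already been pulled; if no local server is running, it simply reports that instead of failing:

```python
import json
import urllib.request

# Query a local Ollama server's /api/generate endpoint. The model name
# and prompt are illustrative; "stream": False asks for one JSON reply.
payload = json.dumps({
    "model": "llama3.2:3b",
    "prompt": "Explain RAG in one sentence.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=15) as resp:
        reply = json.loads(resp.read())["response"]
except OSError:  # no Ollama server reachable on this machine
    reply = "(no local Ollama server running)"

print(reply)
```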
GitHub Copilot is a generative AI tool for coding that assists developers in writing code faster and more efficiently. This full-day course will help you gain a comprehensive understanding of the tool's capabilities and how to use it effectively in your day-to-day coding.
In this full-day class, we'll cover the basics of Copilot and provide you with hands-on experience through labs. You'll learn the what, why, and how of Copilot and see how to leverage its generative AI functionality in daily coding tasks across multiple languages. You'll also learn key techniques and best practices for working with Copilot.
Would you like to learn to use GitHub Copilot to write tests for you? Would you like to learn how to use it to tell you how to test in a language or framework that you're not familiar with? Would you like to have it help you find edge cases to test, or validate inputs? Did you know you can have Copilot generate tests before the code, and then the code itself, to practice Test-Driven Development?
If you're a software engineer, writing tests is part of your job - and likely not the thing you want to spend most of your time on. Join us in this half-day workshop as we present multiple ways to leverage Copilot for testing your code on any platform and framework.
This half-day workshop introduces participants to Claude Code, Anthropic's AI-powered coding assistant. In three hours, attendees will learn how to integrate Claude Code into their development workflow, leverage its capabilities for productivity, and avoid common pitfalls. The workshop also introduces subagents (specialized roles like Planner, Tester, Coder, Refactorer, and DocWriter) to show how structured interactions can improve accuracy and collaboration.
Format: 3-hour interactive workshop (2 × 90-minute sessions + 30-minute break).
Audience: Developers and technical professionals with basic programming knowledge.
Contact us to schedule or for more information.
This hands-on workshop teaches developers how to build modern AI-driven applications using local LLMs, intelligent agents, and standardized tool protocols. Participants learn to move beyond simple prompting to architect AI systems that integrate reasoning, retrieval, and external data sources. The course emphasizes real-world implementation through step-by-step labs in GitHub Codespaces, requiring no local installs.
By the end of the workshop, attendees will have created functional AI agents that use Ollama for local model execution, FastMCP for standardized tool communication, and RAG (Retrieval-Augmented Generation) for context-grounded responses—all deployable to a cloud environment.