In the summer of 2006, I discovered the blossoming world of web APIs: HTTP APIs like the Flickr API, JavaScript APIs like Google Maps API, and platform APIs like the iGoogle gadgets API. I spent my spare time making "mashups": programs that connected together multiple APIs to create new functionality. For example:
A search engine that found song lyrics from Google and their videos from YouTube
A news site that combined RSS feeds from multiple sources
A map plotting Flickr photos alongside travel recommendations
I adored the combinatorial power of APIs, and felt like the world was my mashable oyster. Mashups were actually the reason that I got back into web development, after having left it for a few years.
And now, with the growing popularity of MCP servers, I am getting a sense of deja vu.
An MCP server is an API: it exposes functionality that another program can use. An MCP server must expose that API in a strictly defined way, outputting tool definitions that follow the MCP schema. That allows MCP clients to use the tools (the API) from any MCP server, since the interface is predictable.
But now it is no longer the programmers making the mashups: it's the agents. When using Claude Desktop, you can register MCP servers for searching videos, and Claude can match song lyrics to videos for you. When using GitHub Copilot Agent Mode, you can register the Playwright MCP server for browser automation, and it can write full documentation with screenshots for you. When using any of the many agent frameworks (AutoGen, OpenAI Agents SDK, PydanticAI, LangGraph, etc.), you can point your agent at MCP servers, and the agent will call the most relevant tools as needed, weaving them together with calls to LLMs to extract or summarize information. To really empower an agent to make the best mashups, give it access to a code interpreter, and then it can write and run code to put all the tool outputs together.
And so, we have brought mashups back, but we programmers are no longer the mashers. That is a good thing for non-programmers, as it is empowering people of all backgrounds to weave together their favorite data and functionality. It makes me a bit sad as a programmer, because I love to make direct API calls and control the exact flow of a program. But it is time for me to start managing the mashers and to see what they can create in the hands of others.
Back in 2014, I had a brush with cervical cancer. We fortunately caught it when it was stage 0, the point at which it's not even called cancer, and is called adenocarcinoma instead. I went in for surgery, a procedure called cervical conization, where the doctor basically scrapes the potentially cancerous area out of the cervix and then biopsies the remaining cells to make sure they caught all the sketchy cells.
After the surgery, the doctor told me, "I tried to leave enough cervix for you to have children naturally in the future, but call us when you're done having kids so we can schedule you for a hysterectomy." Apparently, the best way to reduce the risk of future cervical cancer is to remove the cervix entirely, along with the nearby fallopian tubes and uterus. That was really jarring to hear at the time, because I wasn't even close to having kids - I wasn't emotionally ready to be a mom, nor was I in a relationship that was at a settling-down point - and I could already feel the doctors eyeing my reproductive organs for removal.
Well, 11 years later, the doctors finally got their wish (aka, my organs)! I met my partner 7 years ago, we decided to have kids, and I popped out one daughter in 2019 and our second in 2021. After every birth, my doctor would ask if I was ready to stop having kids. We both originally considered having 3 kids, but by the time the second daughter was 2 years old, I realized I was personally ready to say goodbye to my baby-making days. Why?
The Reasons
Pregnancy is really rough. The first pregnancy, I was awed by my body's ability to mutate into a womb-carrying machine, and that was enough distraction from the extreme bodily discomfort. By the second pregnancy, I was over it. I had "morning sickness" most of the first trimester, to the point that I actually lost weight due to my general disgust around food. I was so tired that I qualified as "clinically depressed" (I really like having energy to do things, so I get depressed when I realize I don't have energy to do anything). I had more energy and less nausea in the second and third trimesters, but then I was just constantly annoyed that my massive belly made it impossible for me to do my favorite things, like biking and cartwheels. And then, there's labor! But let's not get into the details of why that sucked… at least it only lasts a few days and not 9 months.
Breastfeeding is boring and injurious. I ended up breastfeeding both my kids for two years, as it somewhat worked logistically, and seemed easier than formula in some ways (no bottle prep). However, I did not find it to be a magical mommy-baby bonding experience. It was just my body being used as a vending machine for very hungry babies for up to 10 hours a day, and me trying to find a way to bide my time while they got their nutrients. I eventually found ways to fill the boredom, thanks to my nursing-friendly computer setups, but I then got multiple nursing-related injuries with the second daughter. I will not detail them here, but once again… rough! If I did have a third, I would consider formula more seriously, but I fear my inner DIY-er would guilt me into breastfeeding once again.
I am outnumbered. Two daughters, one of me. I can't make them both happy at the same time. I am constantly refereeing, trying to make calls about whose toy is whose, who hit whom first, who really ought to share more, and whose turn it is to talk. It is exhausting. When both my partner and I are taking care of them, we can divide and conquer, but if we had a third: we would *always* be outnumbered.
Transportation logistics. We have two car seats in our Forester, we can't fit a third. I have only two seats on my e-bike, I can't fit a third kid. I have a two-kid wagon. I have two hands for holding hands while walking. Etc, etc! Two kids fit pretty well, three kids would require refactoring.
I like having my bodily autonomy back. It was such a great feeling when I finally stopped being pregnant/nursing and could start making decisions solely to benefit my body, without worrying about the effect on children. I stopped feeling so ravenously hungry all the time, and rapidly dropped the 40 pounds I'd gained from motherhood. I could finally rid my feet of a pregnancy-induced 5-year-duration fungus (I know, gross!) with an oral antifungal that I wasn't allowed to take while pregnant/nursing. It is absolutely amazing that women give their bodies up in order to propagate the human race, but it's also amazing when we get to take control of our bodies again.
Housing logistics. We have a 2-bedroom house in the bay area. Our two daughters are currently sharing a room (somewhat happily?) and my partner and I share a room (happily). If we had a third kid, we'd likely have to divide up our house somehow, or move houses. Doable, but not trivial.
I love my kids. I want to end with this reason to make something clear: my daughters are lovely, creative, hilarious souls! I am thankful that I was able to bring them into the world and witness their growth into little humans. By keeping our family smaller, I'll be able to spend more time with them going forward, and not be distracted by new additions. I look forward to many adventures!
The Surgery
Once I was feeling totally certain of the decision, about 6 months ago, I notified my doctor. It took some time before the surgery could actually happen, since I needed to find a time that worked around my work obligations and was free in the doctor's schedule. In the meantime, we discussed exactly what kind of hysterectomy I would get, since there are multiple reproductive organs that can be removed.
What we decided:
Cervix: Obviously, this was on the chopping block, due to it being the site of pre-cancer before. Decision: 🔪 Remove!
Uterus: The uterus is only needed if having more babies, and multiple cancers can start in the uterus. Decision: 🔪 Remove!
Fallopian tubes: These also typically get removed, as ovarian cancer often starts in the tubes (not the ovaries, confusingly). I had a grandmother who got ovarian cancer twice, so it seems helpful to remove the organs where it likely started. Decision: 🔪 Remove!
Ovaries: This was the trickiest decision, as the ovaries are responsible for the hormonal cycle. When a hysterectomy removes the ovaries, that either kicks off menopause early or, to avoid that, you have to take hormones until the age you would naturally start menopause (10 years, for me). Apparently both early menopause and the hormone treatment are associated with other cancers/illnesses, so my doctor recommended keeping the ovaries. Decision: 🥚 Keep!
Getting rid of three organs seems like kind of a big deal, but the surgery can be done in a minimally invasive way, with a few incisions in the abdomen and a tiny camera to guide the surgeon around. It's still a major surgery requiring general anesthesia, however, which was what worried me the most: what if I never woke up?? Fortunately, my best friend is an anesthesiologist at Johns Hopkins and she told me that I'm more likely to be struck by lightning.
My surgery was scheduled for first thing in the morning, so I came in at 6am, got prepped by many kind and funny nurses, and got wheeled into the OR at 8am. The last thing I remember was the anesthesiologist telling me something, and then boom, five hours later, I awoke in another room.
The Recovery
Upon first waking, I was convinced that balloons were popping all around me, and I kept darting my eyes around trying to find the balloons. The nurse tried to reassure me that it was the anesthesia wearing off, and I totally believed her, but also very much wanted to locate the source of the balloon-popping sounds. 👀 🎈
Once the popping stopped, she made sure that I was able to use the bathroom successfully (in case of accidental bladder injury, one of the hysterectomy risks), and then I was cleared to go home! I got home around 2pm, and thus began my recovery journey.
I'll go through each side effect, in order of disappearance.
Fatigue (Days 1 + 2)
That first day, the same day that I actually had the surgery, I was so very sleepy. I could not keep my eyes open for more than an hour, even to watch an amazing Spider-Man movie (the multiverse one). I slept most of the rest of that day.
The second day, I felt sleepy still, but never quite sleepy enough to nap. I would frequently close my eyes and see hypnagogic visions flutter by, and sometimes go lie in my bed to just rest.
The third day, I felt like I had my energy back, with no particular sleepiness.
Nausea (Days 1 + 2)
I was warned by the anesthesiologist that it was common to experience nausea after general anesthesia, especially for women of my age, so they preemptively placed a nausea patch behind my ear during the surgery. The nausea patch had some funky side effects, like double vision, which meant I couldn't look at text on a computer screen for more than a few minutes. I missed being able to use a computer, so I took off the patch on the second night. By the next morning, my vision was restored and I was able to code again!
Abdominal soreness (Days 1-5)
My doctor warned me that I would feel like "you've just done 1000 crunches". I did feel some abdominal soreness/cramping during the first few days, but it felt more like… 100 crunches? It probably helped that I was on a regular schedule of pain medicine: alternating between Ibuprofen and Tylenol every 3 hours, plus Gabapentin 3 times a day. I also wore an abdominal binder the first few days, to support the abdominal muscles. I never felt like my pain was strong enough to warrant also taking the narcotic that they gave me, and I'm happy that I avoided needing that potentially addictive medicine.
There was one point on Day 5 where I started cracking up due to a stuck-peach-situation at the grocery store, and I tried to stop laughing because it hurt so bad… but gosh darn we just couldn't rescue that peach! Lessons learned: don't laugh while you're in recovery, and do not insert a peach into a cupholder that's precisely the same radius as the peach. 🍑
Collarbone soreness (Days 4-6)
My collarbone hurt more than my abdomen, strangely enough. I believe that's due to the way they inflate the torso with gas during the surgery, and the after-effects of that gas on the upper part of the torso. It weirded me out, but it was also a fairly tolerable pain.
Sore throat (Days 1-7)
This was the most surprising and persisting side effect, and it was due to the breathing tube put down my throat during general anesthesia. Apparently, when a surgery is long enough, the patient gets intubated, and that can make your throat really sore after. I couldn't even read a single story to my kids the first few days, and it took me a good week to feel comfortable reading and speaking again. During that week, I drank Throat Coat tea with honey, gargled warm water, sucked on lozenges - anything to get my voice back! It's a good thing that I didn't have to give any talks the week after, as I doubt my voice would have made it through 60 minutes of continuous use.
Surgical wounds (Days 1 - ?)
The doctor made four cuts on my abdomen: one sneaky cut through the belly button, and three other cuts a few inches away from it. They sealed the cuts with liquid glue, which made them look nastier and bigger than they actually were, due to the encrusted blood. The wounds were only painful when I prodded at them from particular angles, or more accurately, when my toddler prodded at them from particularly horrible angles.
By Day 18, the liquid glue had come off entirely, revealing scars about 1/2 inch in length. Only the belly button wound still had a scab. According to my doctor, the belly button wound is the most likely to get infected or herniate, and it takes the longest to heal. Go go gadget belly button!
Activity restrictions (Days 1 - ?)
I stopped taking medicines on day 6, as I didn't feel any of my symptoms warranted medication, and I was generally feeling good. However, I still have many restrictions to ensure optimal healing.
My only allowed physical activity is walking - and I've been walking up the wazoo, since everyone says it helps with recovery. I'm averaging 7K steps daily, whereas I usually average 4K steps. I've realized from this forced-walking experience that I really need to carve out daily walking opportunities, given that I work from home and can easily forget to walk any steps at all. Also, walking is fun when it gives me an excuse to observe nature!
I'm not allowed other physical activity, like biking or yoga. Plus, my body can't be submerged in water, so no baths or swimming. Worst of all: I'm not allowed to lift objects > 20 pounds, which includes my toddler! That's been the hardest restriction, as I have to find other ways to get her into her car seat, wagon, toilet, etc. We mostly play at home, where I can avoid the need for lifting.
At my 6-week post-op appointment, my doctor will evaluate me in person and hopefully remove all my activity restrictions. Then I'll bike, swim, and lift children to my heart's content! 🚴🏻♀️ 🏊🏻 🏋🏼♀️
There are many ways to learn Python online, but there are also many people out there that want to learn Python for multiple reasons - so hey, why not add one more free Python course into the mix? I'm happy to finally release ProficientPython.com, my own approach to teaching introductory Python.
The course covers standard intro topics - variables, functions, logic, loops, lists, strings, dictionaries, files, OOP. However, the course differs in two key ways from most others:
It is based on functions from the very beginning (instead of being based on side effects).
The coding exercises can be completed entirely in the browser (no Python setup needed).
Let's explore those points in more detail.
A functions-based approach
Many introductory programming courses teach first via "side effects", asking students to print out values to a console, draw some graphics, or manipulate a webpage - that sort of thing. In fact, many of my courses have been side-effects-first, like my Intro to JS on Khan Academy that uses ProcessingJS to draw pictures, and all of our web development workshops for GirlDevelopIt. There's a reason that it's a popular approach: it's fun to watch things happen! But there's also a drawback to that approach: students struggle when it's finally time to abstract their code and refactor it into functions, and tend not to use custom functions even when their code would benefit from them.
When I spent a few years teaching Python at UC Berkeley for CS61A, the first course in the CS sequence, I was thrown headfirst into the pre-existing curriculum. That course had originally been taught 100% in Scheme, and it stayed very functions-first when they converted it to Python in the 2000s. (I am explicitly avoiding calling it "functional programming", as functional Python is a bit more extreme than functions-first Python.) Also, CS61A had thousands of students, and functions-based exercises were easier to grade at scale - just add in some doctests! It was my first time teaching an intro course with a functions-first approach, and I grew to really appreciate the benefits for both student learning and classroom scaling.
That's why I chose to use the same approach for ProficientPython.com. The articles are interwoven with coding exercises, and each exercise is a mostly empty function definition with doctests. For example:
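(A sketch in the spirit of the course's dog-age exercises; the exact function shown here is my own illustration, not copied from the course.)

```python
def dog_age(human_years):
    """Returns the equivalent dog age for a number of human years,
    assuming each human year counts for seven dog years.

    >>> dog_age(1)
    7
    >>> dog_age(3)
    21
    """
    return human_years * 7  # Learners start from an empty body and fill this in
```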
When a learner wants to check their work, they run the tests, and it will let them know if any tests have failed:
Each unit also includes a project, which is a Jupyter notebook with multiple function/class definitions. Some of the definitions already have doctests, like this project 2 function:
Sometimes, the learners must write their own doctests, like for this project 1 function:
When I'm giving those projects to a cohort of students, I will also look at their code and give them feedback, as the projects are the most likely place to spot bad practices. Even if a function passes all of its tests, that doesn't mean it's perfect: it may have performance inefficiencies, it may not cover all edge cases, or it just may not be fully "Pythonic".
There's a risk to this functions-based approach: learners have to wrap their minds around functional abstraction very early on, and that can be really tricky for people who are brand new to programming. I provide additional resources in the first unit, like videos of me working through similar exercises, to help those learners get over that hump.
Another drawback is that the functions-based approach doesn't feel quite as "fun" at first glance, especially for those of us who love drawing shapes on the screen and are used to creating K-12 coding courses. I tried to make the exercises interesting in the topics that they tackle, like calculating dog ages or telling fortunes. For the projects, many of them combine function definitions with side effects, such as displaying images, getting inputs from the user, and printing out messages.
Browser-based Python exercises
As programming teachers know, one of the hardest parts of teaching programming is the environment setup: getting every student machine configured with the right Python version, ensuring the right packages are installed, configuring the IDE with the correct extensions, etc. I think it's important for students to eventually learn how to set up their personal programming environment, but it doesn't need to be a barrier when initially learning to program. Students can tackle that when they're already excited about programming and what it can do for them, not when they're dabbling and wondering if programming is the right path for them.
For ProficientPython.com, all of the coding can be completed in the browser, via either inline Pyodide-powered widgets for the exercises or Google Colab notebooks for the projects.
Pyodide-powered coding widgets
Pyodide is a WASM port of Python that can run entirely in the browser, and it has enabled me to develop multiple free browser-based Python learning tools, like Recursion Visualizer and Faded Parsons Puzzles.
For this course, I developed a custom web element that anyone can install from npm: python-code-exercise-element. The element uses Lit, a lightweight framework that wraps the Web Components standards. Then it brings in CodeMirror, the best in-browser code editor, and configures it for Python use.
When the learner selects the "Run Code" or "Run Tests" button, the element spins up a web worker that brings in the Pyodide JS and runs the Python code in the worker. If the code takes too long (> 60 seconds), it assumes there's an infinite loop and gives up.
If the code successfully finishes executing, the element shows the value of the final expression and any standard output that happened along the way:
For a test run, the element parses out the test results and makes them slightly prettier.
The element uses localStorage in the browser to store the user's latest code, and restores code from localStorage upon page load. That way, learners can remember their course progress without needing the overhead of user login and a backend database. I would be happy to add server-side persistence if there's demand, but I love that the course in its current form can be hosted entirely on GitHub Pages for free.
Online Jupyter notebooks
The projects are Jupyter notebooks. Learners can download and complete them in an IDE if they want, but they can also simply save a copy of my hosted Google Colab notebook and complete them using the free Colab quota. I recommend the Colab option, since then it's easy for people to share their projects (via a publicly viewable link), and it's fun to see the unique approaches that people use in the projects.
I have also looked into the possibility of Pyodide-powered Jupyter notebooks. There are several options, like JupyterLite and marimo, but I haven't tried them out yet, since Google Colab works so well. I'd be happy to offer that as an option if folks want it, however. Let me know in the issue tracker.
Why I made the course
I created the course content originally for Uplimit.com, a startup that initially specialized in public programming courses, and hosted the content in their fantastic interactive learning platform. I delivered that course multiple times to cohorts of learners (typically professionals who were upskilling or switching roles), along with my co-teacher Murtaza Ali, whom I first met at UC Berkeley through CS61A.
We would give the course over a 4-week period, 1 week for each unit, starting off each week with a lecture to introduce the unit topics, offering a special topic lecture halfway through the week, and then ending the week with the project. We got great questions and feedback from the students, and I loved seeing their projects.
Once Uplimit pivoted to be an internal training platform, I decided it was time to share the content with the world, and make it as interactive as possible.
If you try out the course and have any feedback, please post in the discussion forum or issue tracker. Thank you! 🙏🏼
Whenever I am teaching Python workshops, tutorials, or classes, I love to use GitHub Codespaces. Any repository on GitHub can be opened inside a GitHub Codespace, which gives the student a full Python environment and a browser-based VS Code. Students spend less time setting up their environment and more time actually coding - the fun part! In this post, I'll walk through my tips for using Codespaces for teaching Python, particularly for classes about web apps, data science, or generative AI.
Getting started
You can start a GitHub Codespace from any repository. Navigate to the front page of the repository, then select "Code" > "Codespaces" > "Create codespace on main":
By default, the Codespace will build an environment based off a universal Docker image, which includes Python, NodeJS, Java, and other popular languages.
But what if you want more control over the environment?
Dev Containers
A dev container is an open specification for describing how a project should be opened in a development environment, and it is supported by several IDEs, including GitHub Codespaces and VS Code (via the Dev Containers extension).
To define a dev container for your repository, add a devcontainer.json that describes the desired Docker image, VS Code extensions, and project settings. Let's look at a few examples, from simple to complex.
For example, my python-3.13-playground repository sets up Python 3.13 using one of the Python-specific dev container images, and also configures a few settings and default extensions:
You can also install OS-level packages in the Dockerfile, using Linux commands like apt-get, as you can see in this fabric-mcp-server Dockerfile.
A devcontainer with docker-compose.yaml
When our dev container is defined with a Dockerfile or image name, the Codespace creates an environment based off a single Docker container, and that is the container that we write our code inside.
It's also possible to setup multiple containers within the Codespace environment, with a primary container for our code development, plus additional services running on other containers. This is a great way to bring in containerized services like PostgreSQL, Redis, MongoDB, etc - anything that can be put in a container and exposed over the container network.
To configure a multi-container environment, add a docker-compose.yaml to the .devcontainer folder. For example, this docker-compose.yaml from my postgresql-playground repository configures a Python container plus a PostgreSQL container:
The devcontainer.json references that docker-compose.yaml file, and declares that the "service" container is the primary container for the environment:
Now let's look at topics you might be teaching in Python classes. One popular topic is web applications built with Python backends, using frameworks like Flask, Django, or FastAPI. A simple webapp can use the Python dev container from earlier, but if the webapp has a database, then you'll want to use the docker-compose setup with multiple containers.
Flask + DB
For example, my flask-db-quiz example configures a Flask backend with PostgreSQL database. The docker-compose.yaml is the same as the previous PostgreSQL example, and the devcontainer.json includes a few additional customizations:
The "portsAttributes" field in devcontainer.json tells Codespaces that we're exposing services at those parts, which makes them easy to find in the Ports tab in VS Code.
Once the app is running, I can click on the URL in the Ports tab and open it in a new window. I can even right-click to change the port visibility, so I can share the URL with classmates or teacher. The URL will only work as long as the Codespace and app are running, but this can be really helpful for quick sharing in class.
Another customization in that devcontainer.json is the addition of the SQLTools extension, for easy browsing of database data. The "sqltools.connection" field sets up everything needed to connect to the local database.
Django + DB
We can use a very similar configuration for Django apps, as demonstrated in my django-quiz-app repository.
By default, Django's built-in security rules are stricter than Flask's, so you may see security errors when using a Django app from the forwarded port's URL, especially when submitting forms. That's because Codespace "local" URLs aren't truly local URLs; they bake the port into the hostname instead of using it as a true port. For example, for a Django app on port 8000, the forwarded URL looks something like "https://your-codespace-name-8000.app.github.dev" rather than "http://localhost:8000".
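If you hit that error with Django, one fix is to tell Django to trust the Codespaces forwarding domain. Here's a sketch of the kind of settings.py change that can resolve it, assuming the CODESPACE_NAME and GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN environment variables that Codespaces sets automatically (check your Django version's docs for the exact CSRF settings):

```python
# settings.py (sketch): trust the Codespaces forwarded origin for CSRF checks
import os

CODESPACE_NAME = os.getenv("CODESPACE_NAME")
FORWARDING_DOMAIN = os.getenv("GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN")

if CODESPACE_NAME and FORWARDING_DOMAIN:
    # Django 4+ requires the scheme (https://) in CSRF_TRUSTED_ORIGINS entries
    CSRF_TRUSTED_ORIGINS = [f"https://{CODESPACE_NAME}-8000.{FORWARDING_DOMAIN}"]
```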
I've run into this with other frameworks as well, so if you ever get a cross-site origin error when running web apps in Codespaces, a similar approach may help you resolve the error.
Teaching Generative AI
For the past two years, a lot of my teaching has been around generative AI models, like large language models and embedding models. Fortunately, there are two ways that we can use Codespaces with those models for free.
GitHub Models
My current favorite approach is to use GitHub Models, which are freely available models for anyone with a GitHub Account. The catch is that they're rate limited, so you can only send a certain number of requests and tokens per day to each model, but you can get a lot of learning done on that limited budget.
To use the models, we can point our favorite Python AI package at the GitHub Models endpoint, and pass in a GitHub Personal Access Token (PAT) as the API key. Fortunately, every Codespace exposes a GITHUB_TOKEN environment variable automatically, so we can just access that directly from the env.
For example, this code uses the OpenAI package to connect to GitHub Models:
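(A minimal sketch of that pattern; the model name and prompt here are just for illustration.)

```python
import os

import openai

# Codespaces provides GITHUB_TOKEN automatically; GitHub Models accepts it as the API key
client = openai.OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain recursion in one sentence."}],
)
print(response.choices[0].message.content)
```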
Alternatively, when you are trying out a GitHub Model from the marketplace, select "Use this Model" to get suggested Python code and open a Codespace with code examples.
My other favorite way to use free generative AI models is Ollama. Ollama is a tool that you can download onto any OS that makes it possible to interact with local language models, especially SLMs (small language models).
On my fairly underpowered Mac M1 laptop, I can run models with up to 8 billion parameters (corresponding to ~5 GB download size). The most powerful LLMs like OpenAI's GPT 4 series typically have a few hundred billion parameters, quite a bit more, but you can get surprisingly good results from smaller models. The Ollama tooling runs a model as efficiently as possible based on the hardware, so it will use a GPU if your machine has one, but otherwise will use various tricks to make the most of the CPU.
I put together an ollama-python playground repo that makes a Codespace with Ollama already downloaded. All of the configuration is done inside devcontainer.json:
I could have installed Ollama using a Dockerfile, but instead, inside the "features" section, I added a dev container feature that takes care of installing Ollama for me. Once the Codespace opens, I can immediately run "ollama pull phi3:mini" and start interacting with the model, and also use Python programs to interact with the locally exposed Ollama API endpoints.
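For example, Ollama serves an OpenAI-compatible endpoint on localhost port 11434, so a sketch like this works once the model has been pulled (the API key value is arbitrary, since Ollama ignores it):

```python
import openai

# Point the OpenAI client at the local Ollama server instead of a hosted API
client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="nokeyneeded")

response = client.chat.completions.create(
    model="phi3:mini",  # the model pulled earlier with `ollama pull phi3:mini`
    messages=[{"role": "user", "content": "Write a haiku about walking."}],
)
print(response.choices[0].message.content)
```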
You may run into issues running larger SLMs, however, due to the Codespace defaulting to a 4-core machine with only 16 GB of RAM. In that case, you can change the "hostRequirements" to "32gb" or even "64gb" and restart the Codespace. Unfortunately, that will use up your monthly free Codespace hours at double or quadruple the rate.
Generally, making requests to a local Ollama model will be slower than making requests to GitHub Models, because they're being processed by a relatively underpowered machine with no GPU. That's why I start with GitHub Models these days, but support using Ollama as a backup, to have as many options as possible.
Teaching Data Science
We can also use Codespaces when teaching data science, when class assignments are more likely to use Jupyter notebooks and scientific computing packages.
If you typically set up your data science environment using Anaconda instead of pip, you can use conda inside the Dockerfile, as demonstrated in my colleague's conda-devcontainer-demo:
FROM mcr.microsoft.com/devcontainers/miniconda:0-3
RUN conda install -n base -c conda-forge mamba
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 \
&& /opt/conda/bin/mamba env create -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
The corresponding devcontainer.json points the Python interpreter path to that conda environment:
That configuration includes a "postCreateCommand", which tells Codespace to run "conda init" once everything is loaded in the environment, inside the actual VS Code terminal. There are times when it makes sense to use the lifecycle commands like postCreateCommand instead of running a command in the Dockerfile, depending on what the command does.
The extensions above include both the Python extension and the Jupyter extension, so that students can get started interacting with Jupyter notebooks immediately. Another helpful extension could be Data Wrangler, which adds richer data browsing to Jupyter notebooks and can generate pandas code for you.
If you are working entirely in Jupyter notebooks, then you may want the full JupyterLab experience. In that case, it's actually possible to open a Codespace in JupyterLab instead of the browser-based VS Code.
Disabling GitHub Copilot
As a professional software developer, I'm a big fan of GitHub Copilot to aid my programming productivity. However, in classroom settings, especially in introductory programming courses, you may want to discourage the use of coding assistants like Copilot. Fortunately, you can disable it via the "github.copilot.enable" setting inside the devcontainer.json "settings", either for all files or specifically for Python.
You could also add that to a .vscode/settings.json so that it would take effect even if the student opened the repository in local VS Code, without using the dev container.
Some classrooms then install their own custom-made extensions that offer more of a TA-like coding assistant, which will help the student debug their code and think through the assignment, but not actually provide the code. Check out the research from CS50 at Harvard and CS61A at UC Berkeley.
Optimizing startup time
When you're first starting up a Codespace for a repository, you might be sitting there waiting for 5-10 minutes, as it builds the Docker image and loads in all the extensions. That's why I often ask students to start loading the Codespace at the very beginning of a lesson, so that it's ready by the time I'm done introducing the topics.
Alternatively, you can use pre-builds to speed up startup time, if you've got the budget for it. Follow the steps to configure a pre-build for the repository, and then Codespace will build the image whenever the repo changes and store it for you. Subsequent startup times will only be a couple minutes. Pre-builds use up free Codespace storage quota more quickly, so you may only want to enable them right before a lesson and disable after. Or, ask if your school can provide more Codespace storage budget.
Codespaces is a great way to set up a fully featured environment complete with extensions and services you need in your class. However, there are some drawbacks to using Codespaces in a classroom setting:
Saving work: Students need to know how to use git to be able to fork, commit, and push changes. Often students don't know how to use git, or can get easily confused (like all of us!). If your students don't know git, then you might opt to have them download their changed code instead and save or submit it using other mechanisms. Some teachers also build VS Code extensions for submitting work.
Losing work: By default, Codespaces only stick around for 30 days, so any changes that haven't been pushed elsewhere are lost after that. If a student forgets to save their work, they will lose it entirely. Once again, you may need to give students other approaches for saving their work more frequently.
Additional resources
If you're a teacher in a classroom, you can also take advantage of these programs:
For PyCon 2025, I created a poster exploring vector embedding models, which you can download at full size.
In this post, I'll translate that poster into words.
Vector embeddings
A vector embedding is a mapping from an input (like a word, list of words, or image) into a list of floating point numbers.
That list of numbers represents that input in the multidimensional embedding space of the model. We refer to the length of the list as its dimensions, so a list with 1024 numbers would have 1024 dimensions.
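For example, here's a sketch of generating an embedding with the OpenAI Python SDK (assuming an OPENAI_API_KEY in the environment; the specific models are discussed below):

```python
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.embeddings.create(model="text-embedding-3-small", input="queen")
vector = response.data[0].embedding

print(len(vector))  # 1536 dimensions for this model
print(vector[:4])   # the first few floating point numbers in the list
```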
Embedding models
Each embedding model has its own dimension length, allowed input types, similarity space, and other characteristics.
word2vec
For a long time, word2vec was the most well-known embedding model. It could only accept single words, but it was easily trainable on any machine, and it is very good at representing the semantic meaning of words. A typical word2vec model outputs vectors of 300 dimensions, though you can customize that during training. This chart shows the 300 dimensions for the word "queen" from a word2vec model that was trained on a Google News dataset:
text-embedding-ada-002
When OpenAI came out with its chat models, it also offered embedding models, like text-embedding-ada-002 which was released in 2022. That model was significant for being powerful, fast, and significantly cheaper than previous models, and is still used by many developers. The text-embedding-ada-002 model accepts up to 8192 "tokens", where a "token" is the unit of measurement for the model (typically corresponding to a word or syllable), and outputs 1536 dimensions. Here are the 1536 dimensions for the word "queen":
Notice the strange spike downward at dimension 196? I found that spike in every single vector embedding generated from the model - short ones, long ones, English ones, Spanish ones, etc. For whatever reason, this model always produces a vector with that spike. Very peculiar!
text-embedding-3-small
In 2024, OpenAI announced two new embedding models, text-embedding-3-small and text-embedding-3-large, which are once again faster and cheaper than the previous model. For this post, we'll use the text-embedding-3-small model as an example. Like the previous model, it accepts 8192 tokens, and it outputs 1536 dimensions by default. As we'll see later, it optionally allows you to output fewer dimensions. Here are the 1536 dimensions for the word "queen":
This time, there is no downward spike, and all of the values look well distributed across the positive and negative.
Similarity spaces
Why go through all this effort to turn inputs into embeddings? Once we have embeddings of different inputs from the same embedding model, then we can compare the vectors using a distance metric, and determine the relative similarity of inputs. Each model has its own "similarity space", so the similarity rankings will vary across models (sometimes only slightly, sometimes significantly). When you're choosing a model, you want to make sure that its similarity rankings are well aligned with human rankings.
For example, let's compare the embedding for "dog" to the embeddings for 1000 common English words, across each of the models, using the cosine similarity metric.
word2vec similarity
For the word2vec model, here are the closest words to "dog", and the similarity distribution across all 1000 words:
As you can see, the cosine similarity values range from 0 to 0.76, with most values clustered between 0 and 0.2.
text-embedding-ada-002 similarity
For the text-embedding-ada-002 model, the closest words and similarity distribution is quite different:
Curiously, the model thinks that "god" is very similar to "dog". My theory is that OpenAI trained this model in a way that made it pay attention to spelling similarities, since that's the main way that "dog" and "god" are similar. Another curiosity is that the similarity values are in a very tight range, between 0.75 and 0.88. Many developers find that unintuitive: we might see a value of 0.75 and assume it indicates highly similar inputs, when for this model it's actually at the low end of the range. That's why it's so important to look at relative similarity values, not absolute ones. Or, if you're going to look at absolute values, you must calibrate your expectations first based on the standard similarity range of each model.
text-embedding-3-small similarity
The text-embedding-3-small model looks more similar to word2vec in terms of its closest words and similarity distribution:
The most similar words are similar in semantics only, not spelling, and the similarity values peak at 0.68, with most values between 0.2 and 0.4. My theory is that OpenAI saw the weirdness in the text-embedding-ada-002 model and cleaned it up in the text-embedding-3 models.
Vector similarity metrics
There are multiple metrics we could possibly use to decide how "similar" two vectors are. We need to get a bit more math-y to understand the metrics, but I've found it helpful to know enough about metrics so that I can pick the right metric for each scenario.
Cosine similarity
The cosine similarity metric is the most well known way to measure the similarity of two vectors, by taking the cosine of the angle between the two vectors.
For cosine similarity, the highest value of 1.0 signifies the two vectors are the most similar possible (overlapping completely, with no angle between them). The lowest theoretical value is -1.0, but as we saw earlier, modern embedding models tend to have a narrower angular distribution, so all the cosine similarity values end up higher than 0.
Here's the formal definition for cosine similarity:
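$$\text{cosine similarity}(A, B) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}}$$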
That formula divides the dot product of the vectors by the product of their magnitudes.
Dot product
The dot product is a metric that can be used on its own to measure similarity. The dot product sums up the products of corresponding vector elements:
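$$A \cdot B = \sum_{i=1}^{n} A_i B_i$$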
Here's what's interesting: cosine similarity and dot product produce the same exact values for unit vectors. How come? A unit vector has a magnitude of 1, so the product of the magnitude of two unit vectors is also 1, which means that the cosine similarity formula simplifies to the dot product formula in that special case.
Many of the popular embedding models do indeed output unit vectors, like text-embedding-ada-002 and text-embedding-3-small. For models like those, we can sometimes get performance speedups from a vector database by using the simpler dot product metric instead of cosine similarity, since the database can skip the extra step to calculate the denominator.
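A quick way to convince yourself of that equivalence, with a couple of made-up vectors and numpy (just a sketch, not tied to any particular embedding model):

```python
import numpy as np

a = np.array([0.5, 1.0, -2.0])
b = np.array([1.5, -0.5, 0.25])

# Normalize both vectors to unit length, as many embedding models already do
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)

cosine = np.dot(a_unit, b_unit) / (np.linalg.norm(a_unit) * np.linalg.norm(b_unit))
dot = np.dot(a_unit, b_unit)

print(cosine, dot)  # identical values, since both magnitudes are 1
```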
Vector distance metrics
Most vector databases also support distance metrics, where a smaller value indicates higher similarity. Two of them are related to the similarity metrics we just discussed: cosine distance is the complement of cosine similarity, and negative inner product is the negation of the dot product.
The Euclidean distance between two vectors is the straight-line distance between the vectors in multi-dimensional space - the path a bird would take to get straight from vector A to vector B.
The formula for calculating Euclidean distance:
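$$d(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2}$$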
The Manhattan distance is the "taxi-cab distance" between the vectors - the path a car would need to take along each dimension of the space. This distance will be longer than Euclidean distance, since it can't take any shortcuts.
The formula for Manhattan distance:
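$$d(A, B) = \sum_{i=1}^{n} |A_i - B_i|$$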
When would you use Euclidean or Manhattan? We don't typically use these metrics with text embedding models, like the ones we've been exploring in this post. However, if you are working with vectors where each dimension has a very specific, intentionally constructed meaning, then these distance metrics may be the best ones for the job.
Vector search
Once we can compute the similarity between two vectors, we can also compute the similarity between an arbitrary input vector and the existing vectors in a database. That's known as vector search, and it's the primary use case for vector embeddings these days. When we use vector search, we can find anything that is similar semantically, not just similar lexicographically. We can also use vector search across languages, since embedding models are frequently trained on more than just English data, and we can use vector search with images as well, if we use a multimodal embedding model that was trained on both text and images.
When we have a small number of vectors, we can do an exhaustive search, measuring the similarity between the input vector and every single stored vector, and returning the full ranked list.
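For a sense of what an exhaustive search involves, here's a sketch using numpy, assuming unit vectors (so dot product equals cosine similarity) and randomly generated stand-ins for real embeddings:

```python
import numpy as np

def exhaustive_search(query, stored_vectors, top_k=5):
    """Rank stored unit vectors by similarity to the query, highest first."""
    similarities = stored_vectors @ query              # one dot product per stored vector
    ranked = np.argsort(similarities)[::-1][:top_k]    # indices of the top_k matches
    return ranked, similarities[ranked]

stored = np.random.rand(1000, 256) * 2 - 1                  # 1000 fake 256-dimension vectors
stored /= np.linalg.norm(stored, axis=1, keepdims=True)     # normalize each row to unit length
query = stored[42]                                          # reuse one as the query

print(exhaustive_search(query, stored))  # index 42 should rank first with similarity 1.0
```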
However, once we start growing our vector database size, we typically need to use an Approximate Nearest Neighbors (ANN) algorithm to search the embedding space heuristically. A popular algorithm is HNSW, but other algorithms can also be used, depending on what your vector database supports and your application requirements.
Algorithm: HNSW
Python package: hnswlib
Example database support: PostgreSQL pgvector extension, Azure AI Search, Chromadb, Weaviate
When our database grows to include millions or even billions of vectors, we start to feel the effects of vector size. It takes a lot of space to store thousands of floating point numbers, and it takes up computation time to calculate their similarity. There are two techniques that we can use to reduce vector size: quantization and dimension reduction.
Scalar quantization
A floating point number requires significant storage space, either 32 bits or 64 bits. The process of scalar quantization turns each floating point number into an 8-bit signed integer. First, the minimum and maximum values are determined, based off either the current known values or a hardcoded min/max for the given embedding model. Then, each floating point number is re-mapped to an integer between -128 and 127 (the int8 range).
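Here's a minimal sketch of that process in Python, assuming we already know the min/max bounds (a real vector database handles this internally and more efficiently):

```python
import numpy as np

def scalar_quantize(vector, min_val, max_val):
    """Re-map each float to an 8-bit signed integer in the range -128..127."""
    vector = np.asarray(vector, dtype=np.float64)
    scaled = (vector - min_val) / (max_val - min_val)     # scale into [0, 1]
    return np.round(scaled * 255 - 128).astype(np.int8)   # map into the int8 range

print(scalar_quantize([-0.3, 0.0, 0.25], min_val=-0.5, max_val=0.5))  # quantized to -77, 0, 63
```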
The resulting list of integers requires ~13% of the original storage, but can still be used for similarity and search, with similar outputs. For example, compare the most similar movie titles to "Moana" between the original floating point vectors and the scalar quantized vectors:
Binary quantization
A more extreme form of compression is binary quantization: turning each floating point number into a single bit, 0 or 1. For this process, the centroid between the minimum and maximum is determined, and any lower value becomes 0 while any higher value becomes 1.
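And a similar sketch for binary quantization (without the bit-packing that a database would apply to actually realize the storage savings):

```python
import numpy as np

def binary_quantize(vector, min_val, max_val):
    """Map each float to 0 or 1, splitting at the centroid of the known bounds."""
    centroid = (min_val + max_val) / 2
    return (np.asarray(vector) > centroid).astype(np.uint8)

print(binary_quantize([-0.3, 0.0, 0.25], min_val=-0.5, max_val=0.5))  # 0, 0, 1
```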
In theory, the resulting list of bits requires only 13% of the storage needed for scalar quantization, but that's only the case if the vector database supports bit-packing - if it has the ability to store multiple bits into a single byte of memory. Incredibly, the list of bits still retains a lot of the original semantic information. Here's a comparison once again for "Moana", this time between the scalar and binary quantized vectors:
Dimension reduction
Another way to compress vectors is to reduce their dimensions - to shorten the length of the list. This is only possible with models that were trained to support Matryoshka Representation Learning (MRL). Fortunately, many newer models like text-embedding-3 were trained with MRL and thus support dimension reduction. In the case of text-embedding-3-small, the default/maximum dimension count is 1536, but the vectors can be reduced all the way down to 256.
You can reduce the dimensions for a vector either via the API call, or you can do it yourself, by slicing the vector and normalizing the result. Here's a comparison of the values between a full 1536 dimension vector and its reduced 256 version, for text-embedding-3-small:
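The do-it-yourself route looks something like this sketch, slicing down to 256 dimensions and re-normalizing to a unit vector (the API route for the text-embedding-3 models is passing a dimensions parameter to the embeddings call):

```python
import numpy as np

def reduce_dimensions(vector, target_dims=256):
    """Keep the first target_dims values of an MRL-trained embedding, then re-normalize."""
    reduced = np.asarray(vector, dtype=np.float64)[:target_dims]
    return reduced / np.linalg.norm(reduced)

full_vector = np.random.rand(1536) * 2 - 1    # stand-in for a real 1536-dimension embedding
short_vector = reduce_dimensions(full_vector)
print(len(short_vector), np.linalg.norm(short_vector))  # 256 and ~1.0
```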
Compression with rescoring
For optimal compression, you can combine both quantization and dimension reduction:
However, you will definitely see a quality degradation for vector search results. There's a way you can both save on storage and get high quality results, however:
For the vector index, use the compressed vectors
Store the original vectors as well, but don't index them
When performing a vector search, oversample: request 10x the N that you actually need
For each result that comes back, swap its compressed vector with the original vector
Rescore every result using the original vectors
Only use the top N of the rescored results
That sounds like a fair bit of work to implement yourself, but databases like Azure AI Search offer rescoring as a built-in feature, so you may find that your vector database makes it easy for you.
Additional resources
If you want to keep digging into vector embeddings:
Explore the Jupyter notebooks that generated all the visualizations above
Check out the links at the bottom of each of those notebooks for further learning
If you are using the DefaultAzureCredential class from the Azure Identity SDK while your user account is associated with multiple tenants, you may find yourself frequently running into API authentication errors (such as HTTP 401/Unauthorized). This post is for you!
These are your two options for successful authentication from a non-default tenant:
Set up your environment precisely to force DefaultAzureCredential to use the desired tenant
Use a specific credential class and explicitly pass in the desired tenant ID
Option 1: Get DefaultAzureCredential working
The DefaultAzureCredential class is a credential chain, which means that it tries a sequence of credential classes until it finds one that can authenticate successfully. The current sequence is:
EnvironmentCredential
WorkloadIdentityCredential
ManagedIdentityCredential
SharedTokenCacheCredential
AzureCliCredential
AzurePowerShellCredential
AzureDeveloperCliCredential
InteractiveBrowserCredential
For example, on my personal machine, only two of those credentials can retrieve tokens:
AzureCliCredential: from logging in with Azure CLI (az login)
AzureDeveloperCliCredential: from logging in with Azure Developer CLI (azd auth login)
Many developers are logged in with those two credentials, so it's crucial to understand how this chained credential works. The AzureCliCredential is earlier in the chain, so if you are logged in with that, you must have the desired tenant set as the "active tenant". According to Azure CLI documentation, there are two ways to set the active tenant:
az account set --subscription SUBSCRIPTION-ID where the subscription is from the desired tenant
az login --tenant TENANT-ID, with no subsequent az login commands after
Whatever option you choose, you can confirm that your desired tenant is currently the default by running az account show and verifying the tenantId in the account details shown.
If you are only logged in with the azd CLI and not the Azure CLI, you have a problem: the azd CLI does not currently have a way to set the active tenant. If that credential is called with no additional information, azd assumes your home tenant, which may not be desired. The azd credential does check for an environment variable called AZURE_TENANT_ID, however, so you can try setting that in your environment before running code that uses DefaultAzureCredential. That should work as long as the DefaultAzureCredential code is truly running in the same environment where AZURE_TENANT_ID has been set.
Option 2: Use specific credentials
Several credential types allow you to explicitly pass in a tenant ID, including both the AzureCliCredential and AzureDeveloperCliCredential. If you know that you’re always going to be logging in with a specific CLI, you can change your code to that credential:
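For example, here's a sketch that passes the tenant ID explicitly to either CLI-based credential (pulling it from an environment variable is just one way to supply it):

```python
import os

from azure.identity import AzureCliCredential, AzureDeveloperCliCredential

tenant_id = os.environ["AZURE_TENANT_ID"]

# If you log in with `az login`:
credential = AzureCliCredential(tenant_id=tenant_id)

# Or, if you log in with `azd auth login`:
# credential = AzureDeveloperCliCredential(tenant_id=tenant_id)
```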
As a best practice, I always like to log out exactly what credential I'm calling and whether I'm passing in a tenant ID, to help me spot any misconfiguration from my logs.
⚠️ Be careful when replacing DefaultAzureCredential if your code will be deployed to a production host! In that case, your deployed code was previously relying on the ManagedIdentityCredential in the chain, so you now need to call that credential class specifically. You will also need to pass in the managed identity's client ID if you're using a user-assigned identity instead of a system-assigned identity.
For example, using managed identity in the Python SDK with user-assigned identity:
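(A sketch; the AZURE_CLIENT_ID environment variable name is illustrative.)

```python
import os

from azure.identity import ManagedIdentityCredential

# For a user-assigned identity, pass the identity's client ID explicitly
credential = ManagedIdentityCredential(client_id=os.environ["AZURE_CLIENT_ID"])
```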
I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, where the desire to learn is high but the available cash is low.
That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct) and even a few embedding models (from OpenAI and Cohere). So cool! With access to such a range of models, you can prototype complex multi-model workflows to improve your productivity or heck, just make something fun for yourself. 🤗
To use GitHub Models, you can start off in no-code mode: open the playground for a model, send a few requests, tweak the parameters, and check out the answers. When you're ready to write code, select "Use this model". A screen will pop up where you can select a programming language (Python/JavaScript/C#/Java/REST) and select an SDK (which varies depending on model). Then you'll get instructions and code for that model, language, and SDK.
But here's what's really cool about GitHub Models: you can use them with all the popular Python AI frameworks, even if the framework has no specific integration with GitHub Models. How is that possible?
The vast majority of Python AI frameworks support the OpenAI Chat Completions API, since that API has become a de facto standard supported by many LLM API providers besides OpenAI itself.
GitHub Models also provide OpenAI-compatible endpoints for chat completion models.
Therefore, any Python AI framework that supports OpenAI-like models can be used with GitHub Models as well. 🎉
To prove my claim, I've made a new repository with examples from eight different Python AI agent packages, all working with GitHub Models: python-ai-agent-frameworks-demos. There are examples for AutoGen, LangGraph, Llamaindex, OpenAI Agents SDK, OpenAI standard SDK, PydanticAI, Semantic Kernel, and SmolAgents. You can open that repository in GitHub Codespaces, install the packages, and get the examples running immediately.
Now let's walk through the API connection code for GitHub Models for each framework. Even if I missed your favorite framework, I hope my tips here will help you connect any framework to GitHub Models.
OpenAI sdk
I'll start with openai, the package that started it all!
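Here's a minimal sketch of the connection code (the prompt is just for illustration):

```python
import os

import openai

client = openai.OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a limerick about Python."}],
)
print(response.choices[0].message.content)
```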
The code above demonstrates the two key parameters we'll need to configure for all frameworks:
api_key: When using OpenAI.com, you pass your OpenAI API key here. When using GitHub Models, you pass in a Personal Access Token (PAT). If you open the repository (or any repository) in GitHub Codespaces, a PAT is already stored in the GITHUB_TOKEN environment variable. However, if you're working locally with GitHub Models, you'll need to generate a PAT yourself and store it. PATs expire after a while, so you need to generate new PATs every so often.
base_url: This parameter tells the OpenAI client to send all requests to "https://models.inference.ai.azure.com" instead of the OpenAI.com API servers. That's the domain that hosts the OpenAI-compatible endpoint for GitHub Models, so you'll always pass that domain as the base URL.
If we're working with the new openai-agents SDK, we use very similar code, but we must use the AsyncOpenAI client from openai instead. Lately, Python AI packages are defaulting to async, because it's so much better for performance.
Now let's look at all of the packages that make it really easy for us, by allowing us to directly bring in an instance of either OpenAI or AsyncOpenAI.
For PydanticAI, we configure an AsyncOpenAI client, then construct an OpenAIModel object from PydanticAI, and pass that model to the agent:
For Semantic Kernel, the code is very similar. We configure an AsyncOpenAI client, then construct an OpenAIChatCompletion object from Semantic Kernel, and add that object to the kernel.
I saved Llamaindex for last, as it is the most different. The Llamaindex Python package has a different constructor for OpenAI.com versus OpenAI-like servers, so I opted to use that OpenAILike constructor instead. However, I also needed an embeddings model for my example, and the package doesn't have an OpenAIEmbeddingsLike constructor, so I used the standard OpenAIEmbedding constructor.
In all of the examples above, I specified the "gpt-4o" model. The "gpt-4o" model is a great choice for agents because it supports function calling, and many agent frameworks only work (or work best) with models that natively support function calling.
Fortunately, GitHub Models includes multiple models that support function calling, at least in my basic experiments:
gpt-4o
gpt-4o-mini
o3-mini
AI21-Jamba-1.5-Large
AI21-Jamba-1.5-Mini
Codestral-2501
Cohere-command-r
Ministral-3B
Mistral-Large-2411
Mistral-Nemo
Mistral-small
You might find that some models work better than others, especially if you're using agents with multiple tools. With GitHub Models, it's very easy to experiment and see for yourself, by simply changing the model name and re-running the code.
So, have you started prototyping AI agents with GitHub Models yet?! Go on, experiment, it's fun!