Reliable color contrast calculations for AI with MCP


The other week, I was at a tech meetup, and there were some presentations on building with the Model Context Protocol (MCP). My prior exposure to MCP was just hype,[1] but seeing it in action made it more interesting. MCP allows you to hook into an AI assistant’s response process, providing tools and resources which the AI can pick from when replying to the user.

Following my last project, I decided to give building my own MCP server a shot. I wanted something very simple but still useful, so I went with my default of accessibility tools. We all know AI assistants can’t do math, so can they accurately give color contrast advice?

If I have a background color #22d3ee, should I use light or dark text on it?

Copilot’s response

For the background color #22d3ee (a bright cyan/aqua), you should use dark text for best readability and accessibility.

Why?

  • The color #22d3ee is light and has high luminance.
  • According to the WCAG contrast guidelines, dark text (such as black or very dark gray) will have a higher contrast ratio against this background than light text (such as white).

Quick Check:

  • #22d3ee (background) vs #000000 (black text): Contrast ratio ≈ 7.61:1 (excellent)
  • #22d3ee (background) vs #ffffff (white text): Contrast ratio ≈ 1.57:1 (fails accessibility)

Conclusion:
Use dark text (like black or very dark gray) on #22d3ee for optimal readability and accessibility.

While it comes to the correct result (that cyan #22d3ee[2] pairs better with dark text), the ratios it gives are incorrect. Black is actually 11.62:1 and white is 1.8:1. I kept going to see if it would be consistently close enough, but…

What if the background is #ff4832?

Copilot’s response

For the background color #ff4832 (a bright, saturated red-orange), you should use light text for optimal readability.

Why?

  • #ff4832 is a vivid and relatively dark color in terms of perceived brightness, even though it’s highly saturated.
  • Light text (such as white or very light gray) will have a much higher contrast ratio compared to dark text.

Contrast Ratios:

  • #ff4832 (background) vs #ffffff (white text): Contrast ratio ≈ 5.07:1 (meets accessibility for normal text)
  • #ff4832 (background) vs #000000 (black text): Contrast ratio ≈ 4.46:1 (just at the accessibility threshold)

Conclusion:
While both black and white text technically meet the minimum contrast ratio of 4.5:1 for normal text, white or light text is preferable on #ff4832, especially for bold headlines or large type. For small or thin text, white provides better readability and a crisper visual appearance.

In this case, it’s telling me to use white on orange because it claims a contrast ratio of 5.07:1. This is totally wrong: white on #ff4832 actually has a contrast ratio of 3.37:1, which does not pass. The AI response is aware[3] of WCAG guidelines, but it’s not actually calculating those values.

So I’ve identified my task — create a tool that AIs can use to correctly calculate the contrast between two colors.
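
For reference, here’s the math the AI should be doing, as a minimal TypeScript sketch of the WCAG 2 formula (it only accepts six-digit #rrggbb hex strings; my actual helper handles more formats):
// Relative luminance per WCAG 2: linearize each sRGB channel,
// then take a weighted sum.
function relativeLuminance(hex: string): number {
	const [r, g, b] = [1, 3, 5].map((i) => {
		const channel = parseInt(hex.slice(i, i + 2), 16) / 255;
		return channel <= 0.03928
			? channel / 12.92
			: ((channel + 0.055) / 1.055) ** 2.4;
	});
	return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1
function contrastRatio(colorA: string, colorB: string): number {
	const [lighter, darker] = [
		relativeLuminance(colorA),
		relativeLuminance(colorB),
	].sort((a, b) => b - a);
	return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio("#22d3ee", "#000000"); // ≈ 11.62, not Copilot’s 7.61
contrastRatio("#22d3ee", "#ffffff"); // ≈ 1.81, not Copilot’s 1.57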

Building the MCP tools

I decided to start with two simple tools (which became three). A tool in an MCP server is like a function, or an API endpoint. When creating a tool, it’s registered with some information such as the title, description, and an inputSchema. The title and description help inform the AI assistant about when to use the given tool over another. The input schema is used to parse the relevant content out of the prompt into the parameters that are passed to the tool callback.

In my case, I accept any color strings in two variables colorA and colorB, which go through my helper function to get a contrast ratio. This is returned as text content using the unstructured “content” field. Other options are images, audio, resource documents, or structured content following a given outputSchema. In this case, the text format works fine.

Here’s the code that does what I’ve just described:
// server is an instance of McpServer from the MCP TypeScript SDK,
// z is the zod schema library, and getContrastRatio is my color helper.
server.registerTool(
	"get-color-contrast",
	{
		title: "Get Color Contrast",
		description: "Get the WCAG contrast value between two colors.",
		inputSchema: {
			colorA: z.string().describe("First color (hex, rgb, hsl, or named color)"),
			colorB: z.string().describe("Second color (hex, rgb, hsl, or named color)"),
		},
	},
	async ({ colorA, colorB }) => {
		const contrast = getContrastRatio(colorA, colorB);

		// getContrastRatio returns a falsy value when a color can't be parsed
		if (!contrast) {
			return {
				content: [
					{
						type: "text",
						text: "Failed to process color data",
					},
				],
			};
		}

		return {
			content: [
				{
					type: "text",
					text: contrast.toFixed(2),
				},
			],
		};
	},
);

When the AI gets a prompt like “What is the contrast between #c1bf5b and #360e0b?” it will know to use my tool and parse out the two color values based on the schema. It will pass in { colorA: "#c1bf5b", colorB: "#360e0b" } to the callback, which will return the ratio of 8.88, and the AI will use that value in its response back to the user.

The other two tools I implemented check whether two colors are accessible together (given an optional WCAG level and text size), and whether a color should be paired with a light or dark color (again, with an optional level and size).
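
As an illustration, here’s roughly how that light-or-dark tool could be registered (a sketch, not the exact implementation: it reuses the same getContrastRatio helper and skips the optional level and size inputs):
server.registerTool(
	"use-light-or-dark",
	{
		title: "Use Light or Dark",
		description: "Determine whether a color should pair with light or dark text.",
		inputSchema: {
			color: z.string().describe("Color to check (hex, rgb, hsl, or named color)"),
		},
	},
	async ({ color }) => {
		// Compare the color against white and black and pick the stronger pairing
		const vsLight = getContrastRatio(color, "#ffffff");
		const vsDark = getContrastRatio(color, "#000000");

		if (!vsLight || !vsDark) {
			return {
				content: [{ type: "text", text: "Failed to process color data" }],
			};
		}

		const pick = vsLight > vsDark ? "light" : "dark";
		return {
			content: [
				{
					type: "text",
					text: `Color ${color} should be used with a ${pick} color for contrast ${Math.max(vsLight, vsDark)}.`,
				},
			],
		};
	},
);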

Stdio server & HTTP server

The server I implemented by following this documentation was the stdio version, meaning the AI and program communicate by “standard input/output” — when you use a CLI command and it asks you for more information (like confirming an install), that’s stdio. It runs locally on your computer with the AI client. The alternative is HTTP, creating a remote server that communicates with GET/POST/DELETE requests.
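
Wiring the server up to stdio takes only a couple of lines with the TypeScript SDK (a sketch of the documented transport setup; the log goes to stderr, since stdout is reserved for the protocol):
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Connect the server (with its registered tools) to stdin/stdout
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Accessibility MCP Server running on stdio");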

I implemented both to try it out, and I’m glad I did: implementing the HTTP server, even for this simple task, clarified how the communication works. This is also where I had to dive into the spec definition for more details. The AI and MCP server communicate using JSON-RPC 2.0, so there is one endpoint for all interactions, and the data in the request tells the MCP server what to do. JSON-RPC is a specific format for requests and responses, and MCP defines a few specific methods that it can respond to. For example, it can list the available tools with the method tools/list. Note that jsonrpc and id are required by the JSON-RPC protocol. The response is a text event stream, which the client AI will understand and process.

> POST /mcp HTTP/1.1
> Host: localhost:3000
> Accept: application/json,text/event-stream
> Content-Type: application/json

| {
|   "jsonrpc": "2.0",
|   "method": "tools/list",
|   "id": 1
| }

< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/event-stream

| event: message
| data: {
|   "result": {
|     "tools": [
|       {
|         "name": "get-color-contrast",
|         "title": "Get Color Contrast",
|         "description": "Get the WCAG contrast value between two colors.",
|         "inputSchema": {
|           "type": "object",
|           "properties": {
|             "colorA": {
|               "type": "string",
|               "description": "First color (hex, rgb, hsl, or named color)"
|             }, …

Here’s another example, this time calling a specific tool to get the tool output. The request has the same shell, but now there’s a params property with the name of the tool and the arguments it takes (as defined in the inputSchema). The response here has a result; if something had gone wrong, it would have an error property instead.

> POST /mcp HTTP/1.1
> Host: localhost:3000
> Accept: application/json,text/event-stream
> Content-Type: application/json

| {
|   "jsonrpc": "2.0",
|   "method": "tools/call",
|   "params": {
|     "name": "use-light-or-dark",
|     "arguments": {
|       "color": "#172554"
|     }
|   },
|   "id": 1
| }

< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/event-stream

| event: message
| data: {
|   "result": {
|     "content": [
|       {
|         "type": "text",
|         "text": "Color #172554 should be used with a light color for contrast 14.694794518800467."
|       }
|     ]
|   },
|   "jsonrpc": "2.0",
|   "id": 1
| }

Since my server is a simple request & response, I’m using the stateless setup (it doesn’t need to track state between prompts). This means the only request my server needs to handle is the POST. If this were a stateful app, where we needed to keep track of a user’s session, it would also need GET (to create a session) and DELETE (to close the session), and each POST would need the session ID attached.
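
Here’s roughly what that stateless POST handler looks like with the SDK’s StreamableHTTPServerTransport (a sketch following the SDK’s stateless example; createServer is a stand-in for whatever function builds the McpServer and registers the tools):
import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
	// Stateless mode: a fresh server + transport per request, no session IDs
	const server = createServer(); // hypothetical factory that registers the tools
	const transport = new StreamableHTTPServerTransport({
		sessionIdGenerator: undefined,
	});
	res.on("close", () => {
		transport.close();
		server.close();
	});
	await server.connect(transport);
	await transport.handleRequest(req, res, req.body);
});

app.listen(3000);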

This HTTP server detour prompted me to try passing a request into the stdio server, and I was happy to see that work. Now that I understand the JSON-RPC request format, I can make test requests against the stdio server too.

$ echo "{\"jsonrpc\": \"2.0\",\"method\": \"tools/call\",\"params\": {\"name\": \"get-color-contrast\",\"arguments\": {\"colorA\": \"#ff0000\",\"colorB\": \"#ffffff\"}},\"id\": 1}" | node ./build/index.js
Accessibility MCP Server running on stdio
{"result":{"content":[{"type":"text","text":"4.00"}]},"jsonrpc":"2.0","id":1}

(I used this process to create an integration test to check the server output given some input; check that file out if you’re interested in my approach.)
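
The shape of that test is roughly this (a sketch using node:test, not my actual test file; it assumes the startup banner is printed to stderr, so stdout contains only the JSON-RPC response):
import { test } from "node:test";
import assert from "node:assert";
import { execFile } from "node:child_process";

test("get-color-contrast returns the red/white ratio", (t, done) => {
	// Spawn the stdio server, write one JSON-RPC request, and read the reply
	const child = execFile("node", ["./build/index.js"], (err, stdout) => {
		assert.ifError(err);
		const response = JSON.parse(stdout.trim());
		assert.strictEqual(response.result.content[0].text, "4.00");
		done();
	});
	child.stdin.write(
		JSON.stringify({
			jsonrpc: "2.0",
			method: "tools/call",
			params: {
				name: "get-color-contrast",
				arguments: { colorA: "#ff0000", colorB: "#ffffff" },
			},
			id: 1,
		}),
	);
	child.stdin.end();
});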

I don’t think I’m going to launch the HTTP server anywhere, though it was a good learning experience. The a11y-color-contrast-mcp package is simple enough to run via npx.

Integrating with Claude

Using the code I wrote was much easier than I expected (barring some issues with nvm); the trick is making sure you’re using full paths to everything. In claude_desktop_config.json (in ~/Library/Application\ Support/Claude/ on macOS, and $env:AppData\Claude\ on Windows), I added the following (for development):

{
  "mcpServers": {
    "accessibility": {
      "command": "/path-home/.nvm/versions/node/v22.9.0/bin/node",
      "args": ["/path-home/Projects/a11y-color-contrast-mcp/build/index.js"]
    }
  }
}

If you’re not sure of the path to node, you can run which node to find it.

This tells Claude to spin up node a11y-color-contrast-mcp/build/index.js, which starts the stdio server. Claude talks to it to detect the available tools and can then use them when replying.

Claude settings screen listing out my Accessibility tools

For example, let’s try that original color scheme prompt, with the orange that Copilot got wrong.

If I have a background color #ff4832, should I use light or dark text on it?

Claude’s response

For a background color of #ff4832 (bright red/orange), you should use dark text.

The contrast ratio with dark text would be approximately 6.2:1, which provides good accessibility and meets WCAG AA standards for both normal and large text. This bright red background is light enough that dark text will stand out clearly and provide much better readability than light text would.

This correctly says to use dark text and replies with the correct contrast ratio.

Now that these tools are in Claude’s repertoire, it will also use them for any related question. For example, I tried asking for a set of accessible color pairings, and it used the tool several times until it found pairings.

Claude screenshot with prompt "Can you suggest 3 color pairs that are accessible? The colors should not be black or white, and should be reminiscent of fall." Reply shows Claude trying 6 times with the "Are Colors Accessible" tool before replying with 3 pairings.

Another prompt I tried was passing in a few color values and asking for the accessible combinations, much faster than color-picking all combinations myself.

Usage

I’ve published this package to npm as a11y-color-contrast-mcp, so if you want to try using it in Claude (or another supporting client), you can do so without needing to set anything up.

{
  "mcpServers": {
    "accessibility": {
      "command": "npx",
      "args": ["-y", "a11y-color-contrast-mcp"]
    }
  }
}

Now Claude will use these tools when it gets a message that seems relevant. My tool descriptions are what it uses to determine that, and the matching is pretty broad: anything with colors and accessibility seems to trigger it. The first time you trigger a tool, you’ll need to approve it before it will run. And then you’ll get your accurate results!

This was a great little project and I’m glad I was able to dive into understanding the MCP communication. Find the project on GitHub and let me know if there are any issues or suggestions for the tool.


  1. Seriously, “the USB of AI”? Though this is a funny little connection to Producing Open Source Software, which I’m reading for a book club. The author explains how important a clear mission statement is, and how an unclear statement could put someone off of learning more — which is exactly what happened back then. ↩︎
  2. Let’s just ignore, for these examples, that no one should want to use these neon colors for backgrounds. ↩︎
  3. Talking about AI without ascribing it motivation is hard. ↩︎
