Your headphones just got two cameras: meet Project Motoko

Razer Project Motoko is a wireless AI headset concept that adds dual first-person cameras at eye level to something you already wear for games, calls, and streams. Razer showed it at CES 2026, and the bet is that a headset can do more than play audio at a time when so much of life is recorded.

Motoko pairs those cameras with near-field and far-field microphones, so it can capture both what you see and what's happening around you. Razer also describes it as a front end for multiple AI services, with the idea that you can switch assistants based on the task.


It’s still a concept, but there’s a near-term milestone. Razer is taking sign-ups for a Developer Kit it expects in Q2 2026, which is the first real hint at how serious this is beyond the show floor.

Two cameras, one viewpoint

The defining choice is placement. The cameras sit where your eyes are, not on your monitor or on your desk, so the footage matches your natural perspective. Cameras aren't new to Razer's lineup, either: the Leviathan V2 Pro soundbar uses infrared cameras to track your head and ears.

Razer links the stereo setup to depth cues and scene understanding, a different goal than a basic webcam has. With the microphones in the mix, Motoko is meant to capture both the visual and audio context in front of you.

If it works well, it could make hands-free POV capture feel less like a compromise and more like a default. If it doesn’t, it becomes another gadget that only looks good in controlled demos.


Why streamers and robot builders care

Creators will notice the obvious upside. Eye-level video can look more natural than chest mounts, and it can move with you from desk to couch to IRL without rebuilding your setup every time.

Developers get a different hook. Razer explicitly points to computer vision workflows and robotics, and it talks about collecting point-of-view data that includes depth and attention patterns. That positioning puts Motoko closer to a wearable sensor platform than a creator accessory.
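
To make the depth angle concrete, here's a rough sketch of how a developer might turn two eye-level frames into a depth map with off-the-shelf tools. Razer hasn't published an SDK, camera specs, or a baseline distance, so the device indices, focal length, and baseline below are placeholder assumptions, not Motoko details.

```python
# Illustrative only: stereo depth from two head-mounted cameras via OpenCV.
# Device indices, baseline, and focal length are assumptions for the sketch.
import cv2
import numpy as np

BASELINE_M = 0.14   # assumed spacing between the two cameras, roughly ear to ear
FOCAL_PX = 700.0    # assumed focal length in pixels; real values come from calibration

left_cam = cv2.VideoCapture(0)   # hypothetical left camera index
right_cam = cv2.VideoCapture(1)  # hypothetical right camera index

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

ok_l, left = left_cam.read()
ok_r, right = right_cam.read()
if ok_l and ok_r:
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # StereoBM returns fixed-point disparity with 4 fractional bits
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0
    # depth = focal_length * baseline / disparity, valid where disparity > 0
    with np.errstate(divide="ignore"):
        depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, 0)
    print("median scene depth (m):", np.median(depth_m[depth_m > 0]))

left_cam.release()
right_cam.release()
```

The point of the sketch is the math, not the library: two cameras a known distance apart let you recover per-pixel depth from disparity, which is exactly the kind of point-of-view data Razer is pitching to robotics and CV developers.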

The multi-assistant angle matters most here. If Motoko can reliably hand off tasks to different AI back ends, it may fit a wider range of experiments than a single locked-in assistant.
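
As a toy illustration of that hand-off, here's what routing between back ends could look like. None of this is Razer's API; the class names, routing keywords, and stand-in assistants are invented for the sketch.

```python
# A minimal sketch of switching AI back ends by task. Everything here is hypothetical.
from typing import Protocol

class Assistant(Protocol):
    name: str
    def ask(self, prompt: str) -> str: ...

class EchoAssistant:
    """Stand-in back end; a real one would call a cloud or on-device model."""
    def __init__(self, name: str):
        self.name = name
    def ask(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

# Route by keyword for simplicity; a shipping device might classify intent instead.
ROUTES = {
    "code": EchoAssistant("coding-model"),
    "navigate": EchoAssistant("vision-model"),
}
DEFAULT = EchoAssistant("general-model")

def route(prompt: str) -> str:
    backend = next((a for key, a in ROUTES.items() if key in prompt.lower()), DEFAULT)
    return backend.ask(prompt)

print(route("navigate me to the kitchen"))  # handled by the vision back end
```

The hard part isn't the dispatch, it's doing it reliably with live camera and microphone context, which is what the Developer Kit will have to prove.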

What to watch next

The Developer Kit will either validate the idea or expose its limits fast. If it arrives in Q2 2026 and developers ship prototypes, you’ll see whether stereo POV is genuinely useful or mostly novelty.

After that, the missing specs become the story. Camera resolution, field of view, onboard processing, battery life, and where video is stored or sent will shape the privacy and practicality questions. Those details decide whether Project Motoko turns into a real tool or stays a CES concept.
