About

I'm Roger Wang, a software engineer broadly interested in ML research and systems. I care about building reliable, practical tools that people actually want to use.

I'm a core maintainer of vLLM and vLLM-Omni, where I focus on building infrastructure to support large multimodal and omni-modality models. I recently co-founded Inferact, a startup working to make AI inference cheaper and faster while growing vLLM into the world's AI inference engine.

For vLLM-specific collaboration and questions, please email me at rogerw@vllm.ai or join our Slack at slack.vllm.ai. You can also reach me at hey@rogerw.io. I'm currently not open to new opportunities, but I'm happy to chat about and collaborate on interesting open-source projects or research ideas.