Apply here: https://jobs.polymer.co/hume-ai
Hume is a research lab and startup that provides the most advanced AI toolkit to measure, understand, and improve how technology affects human emotion. Where other AI companies see only words, we see the other half of human communication: subtle tones of voice, word emphasis, facial expression, and more. These behaviors reveal our preferences—whether we find things interesting or boring; satisfying or frustrating; funny, eloquent, or dubious. Trained with billions of human expressions, our LLMs will be better question answerers, copywriters, tutors, call center agents, and more.
Our goal is to enable a future where technology draws on an understanding of human emotional expression to better serve our goals and support our well-being. We currently provide API access to our expression measurement models to help developers build better healthcare solutions, digital assistants, communication tools, and more, optimizing their products for human well-being. We're also building a groundbreaking, first-of-its-kind empathic voice assistant.
We're seeking technical talent interested in working with our research team to build state-of-the-art LLMs and scale up our systems. Our new LLM training method - reinforcement learning from human expression (RLHE) - learns human preferences from behavior in millions of audio and video recordings, making LLMs superhumanly helpful, interesting, funny, eloquent, honest, and altruistic. Join us in the heart of New York City and contribute to our endeavor to ensure that AI is guided by human values, the most pivotal challenge and opportunity of the 21st century.
As part of our mission, we also conduct groundbreaking scientific research, publish in leading scientific journals like Nature, and support a non-profit, The Hume Initiative, that has released the first concrete ethical guidelines for empathic AI (www.thehumeinitiative.org).
You can learn more about us on our website (https://hume.ai/) and read about us in Axios (https://www.axios.com/2023/01/26/startup-ai-emotions-hume) and The Washington Post (https://www.washingtonpost.com/technology/2022/01/17/artific...).
At Intuitive we design, develop, and manufacture robotic products that improve clinical outcomes for patients through minimally invasive surgery, most notably the da Vinci Surgical System.
I lead the software team in the Advanced Product Development group at Intuitive, where we use the cutting-edge technology of today to build the surgical robotics systems of the future. This role is ideal if you love working close to the hardware on a technically interesting and challenging embedded systems architecture project, and you value making the best possible product to enhance clinical outcomes for millions of patients.
What to expect:
- Requirements gathering and collaborative embedded system architecture definition
- FPGA-based board bring-up and debug
- RTOS virtualization configuration and performance tuning
- HAL-level driver design & updates (e.g. QNX, Linux, FPGA interfaces, PCIe switch)
- Software toolchain integration
- Design and implementation of an instrumented velocity controller, integrated with an FMU model
- Develop detailed architecture documentation with performance data
Please reach out directly to discuss the role: dan.miller AT intusurg.com. Include HN in the subject line and attach your resume.