with a multimodal AI companion that can see, hear, and read
Get Early Access
David is built on in-house technologies and doesn't rely on or share data with external APIs such as the OpenAI API.
What can you do, David?
I can manage your
Tasks
Estimate the time required for a task, split it into subtasks, track progress, and help you complete it
Reminders
Manage your calendar, remind you about upcoming events, help via recordings, and much more
Notes and documents
Memorize your notes and documents to make them instantly available to you
I can help with many things across many formats
Videos
Summarize, discuss and help find specific scenes in videos.
Audio
Extract action items and summaries from meeting recordings, discuss them, and find specific moments in a recording.
Webpages
Browse articles, websites and other webpages to discuss, summarize and answer questions.
Images
Read text from images, detect objects, and answer questions about photos.
Files
Read and analyze files to discuss, summarize and answer questions.
For example, this is how much you can achieve in just 4 minutes
Pull all ingredients for a shopping list from a recipe video using voice
watch
Get the timing of specific exercises in a 30-minute workout video
watch
Prepare detailed action items from a meeting recording
watch
Create a task with a subtask breakdown and time estimate
watch
Review current active tasks using voice
watch
First AI assistant with video understanding
I can see and hear videos, and I can summarize them, answer questions, and find specific scenes.
Use case: Using voice and chat, extract ingredients from a recipe video and exercises from a 30-minute workout video
Playback speed is not accelerated and an already-loaded video is shown; the first load can take 1–2 minutes
Coming soon: Real-time voice-to-voice interface to interact with any attachment, tasks, reminders, and notes
Voice-to-voice speed is close to real time depending on your Internet connection; the demo was recorded on a slow connection
First multimodal real-time voice interface
Unlike available real-time voice models that support only simple chatting, my voice interface is integrated with attachments, tasks, reminders, and notes, just like my text chat interface
Designed for your tasks and chores with multi-step reasoning
I'm not a general-purpose assistant. You don't need to write a detailed prompt every time you need something. I can predict your intent and automatically perform actions, handling all the heavy lifting in the middle: from finding free spots in your calendar to subtask decomposition and estimation.
I can even self-analyze my responses to improve them.
Use case: Using voice and chat, check current tasks and create a new one with a time estimate and subtask breakdown
Use case: Extract detailed action items with timestamps from a meeting recording in less than a minute
Rich interactive responses instead of plain text
I don't just output plain text. I can respond with audio or video timestamp links, video snapshots, Gantt views, side notes that surface additional information in one click, and more, providing a richer user experience.
My memory as your second brain
Everything you share, from notes to documents, and everything we discuss stays between us and is automatically and instantly available to you via an alternative second-brain graph view interface.