Scaling efforts for neglected approaches to AI alignment
About AE Studio
Scaled to over 150 people as a profitable bootstrapped business delivering technically complex software products
Sold our first internal startup to reinvest in BCI and AGI alignment
BCI expertise: Advancing and democratizing development by releasing open-source software like our Neurotech Development Kit and creating neural data simulators, as well as winning the Neural Latents Benchmark Challenge and working with leading BCI hardware companies such as Blackrock Neurotech
Our network: Collaboration with a diverse array of tech startups, enterprise clients, and academic research institutions globally
Foundational goal: Solve AI x-risk with a focus on neglected approaches, such as enhancing human cognitive capabilities and building prosocial AI
Vision: Build an industrial research institution akin to Bell Labs and the Manhattan Project
Priority: Advancing Brain-Computer Interface (BCI) technologies to mitigate AGI alignment risks
Current Initiatives
Accelerating Brain-Computer Interface (BCI) technologies to mitigate AGI alignment risks, focusing specifically on enhancing human cognition and understanding human values.
Building prosocial AI, based on the belief that an AGI with a model of its own attention, and of the attention of other agents and humans, is more likely to yield a prosocial equilibrium that can be sustained as its capabilities advance. We aim to determine the initial conditions required for research and development that ensures this awareness is developed as early as possible.
Our Mission
AE Studio was originally founded to increase human agency. In the beginning, its CEO recognized that humans struggle to think complete thoughts, doing battle with technology that distracts and interrupts. This led to the original “big, hairy, audacious goal” (BHAG) of building the future of human thought with BCI. No one can complete a potentially world-changing thought if Meta interjects an advertisement and stimulates dopamine receptors to boot.
But now, there is a more imminent threat to human agency and the thoughts within our heads. As artificial general intelligence (AGI) approaches and large language models gain capabilities at accelerating speeds, failure to align these algorithms with the ethics and aspirations of human beings risks a loss of agency that is existential, unimaginable, and permanent.
We bootstrapped our business to over 150 people, sold a startup, and accelerated the BCI space toward a more agency-increasing horizon; now, given shortening timelines, we are transitioning to AI alignment research.
We believe we are uniquely poised to scale neglected approaches to alignment, given our successful consulting business, which consistently delivers the best work from the best developers, designers, and data scientists, as well as our impact so far on emerging neurotech research. Leveraging these comparative advantages, we are pursuing different neurotechnologies for alignment while continuing to accelerate the field, working with a Princeton lab on building prosocial AI, and further scaling our alignment aspirations by exploring collaborations with professors, independent alignment researchers, and other organizations.
Reach out to collaborate, apply to work with us, and support the development of AE’s research.
ETH Wallet Address
0x09d378Bc93122e6B6EEE3e8Aea6fE88F5f6aE632