Researchers from the LNM Institute of Information Technology and the Indian Institute of Information Technology recently proposed a new model for AI—one that goes through a human life cycle. The team focused on pairing different AI systems with their functional equivalents in the human brain. By mimicking our own biological advantages, the researchers believe that AI could eventually become an ever-evolving assistant that learns to better suit individual users’ needs.
Freed from the bounds of biology, AI can learn from data at speeds incomprehensible to the human mind. And with that speed, AI can accomplish things that might normally take a human hours to complete, like analyzing a database of thousands of customer habits. But our brain, sculpted by millions upon millions of years of evolution, is more energy efficient, more adaptable, and can learn from a relatively limited pool of data—we don’t need to see thousands of images of horses to learn what a horse is.
To mimic these biological benefits, some scientists think we need to push AI more toward how the human brain itself functions. For some, that’s led to the idea of neuromorphic computing, which aims to replicate the human brain’s neural structures within computers; for others, it means tearing down current AI architectures and starting from scratch.
But a new peer-reviewed article published earlier this year in the International Journal of Transdisciplinary Research and Perspectives is taking a slightly different approach. Researchers are pairing AI systems with parts of the human brain, while also giving the AIs their own version of a human life cycle. According to the paper, the new AI system develops a personality, sleeps, dreams—and eventually dies. This means AI could essentially become an assistant that progressively adapts to its user, rather than just a rigid input-output machine.
In the paper, computer scientist Krrish Choudhary at the LNM Institute of Information Technology in Jaipur, India, along with his coauthor, Tanvi Kandoi from the Indian Institute of Information Technology, uses a neuroscience-inspired approach to replicate the human mind. The pair draw parallels between AI models and nearly two dozen brain structures, processes, hormones, and neurotransmitters. For example, the visual cortex could be paired with Google DeepMind’s vision-language model (VLM), PaliGemma. Here, “REM sleep” would play out via synthetic generation, with the AI producing text, images, and videos the same way our brains create scenes while dreaming.
“The architecture described in the paper organizes intelligence into specialized subsystems, closely mirroring the functional layout of the brain,” Choudhary says in an email. “This differs sharply from neuromorphic computing—instead, our approach focuses on functional equivalence, using existing AI components to reconstruct the organizational logic of the brain.”
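To make the idea of “functional equivalence” concrete, here is a minimal sketch of what such a life-cycle loop might look like in code. Everything here is illustrative: the class, the brain-to-AI mapping, the lifespan, and the wake/sleep phases are hypothetical stand-ins, not the paper’s actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical mapping of brain subsystems to AI components,
# in the spirit of the paper's functional-equivalence approach.
BRAIN_TO_AI = {
    "visual_cortex": "vision-language model (e.g., PaliGemma)",
    "rem_sleep": "synthetic data generation",
}

@dataclass
class BrainInspiredAgent:
    """Toy agent that wakes, 'dreams,' ages, and eventually 'dies.'"""
    age: int = 0
    lifespan: int = 3  # arbitrary cutoff for the toy life cycle
    memories: list = field(default_factory=list)
    dreams: list = field(default_factory=list)

    def wake(self, experience: str) -> None:
        # Waking phase: record a new experience (a stand-in for user interaction).
        self.memories.append(experience)

    def sleep(self) -> None:
        # "REM sleep" as synthetic generation: recombine the latest memory
        # into a new synthetic sample, loosely analogous to dreaming.
        if self.memories:
            self.dreams.append("dream of " + self.memories[-1])
        self.age += 1

    @property
    def alive(self) -> bool:
        # The life cycle ends once the agent reaches its lifespan.
        return self.age < self.lifespan

agent = BrainInspiredAgent()
while agent.alive:
    agent.wake(f"interaction {agent.age}")
    agent.sleep()
```

In this sketch the agent accumulates memories while “awake,” generates synthetic “dreams” during each sleep cycle, and stops running once its lifespan is reached, echoing the paper’s progression from adaptation to death.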