
Beyond Computer Vision, Brains In Jars, And How They See

Jul 03, 2025 am 11:13 AM


But now we have this whole other dichotomy between neural networks, which in their modern deep-learning form are only a decade or so old, and new biological organoid brains grown from living tissue in a laboratory.

If you feel like this is going to be deeply confusing in terms of neurological research, you’re not alone – and there are a lot of unanswered questions around how the brain works, even with these fully developed simulations and models.

Terminology and Methodology with Organoids

A couple of weeks ago, I wrote about the process of growing brain matter in a lab. Not just growing brain matter, but growing a small pear-shaped brain, which scientists call an organoid, and which apparently can grow its own eyes.

Observing that sort of strange phenomenon feeds our instinctive tendency to connect vision to intelligence – to explore the relationship between the eye and the brain.

People on both sides of the aisle, in AI research and in bioscience, have been looking at this relationship. In developing some of today's most promising neural net models, researchers drew inspiration from the primitive visual capabilities of a roundworm called C. elegans, an organism whose simple, fully mapped nervous system has famously informed both AI and medical research.

Back to the production of these brain-like organoids: in further research, I found that scientists grow them from stem cells in something called "Matrigel," a gel matrix that came out of decades of work on tumor material from lab mice. There's a lot to unpack there, and we'll probably hear a lot more about this as people realize these mini-brains exist.

Exploring Vision and Intelligence

One of the tech talks at a recent Imagination in Action event also piqued my interest in this area. It came from Kushagra Tiwary, who talked about exploring “what if” scenarios involving different kinds of evolution.

“One of the first questions that we ask is: what if the goals of vision were different, right? What if vision evolved for completely different things? The second question we're asking is: We all have lenses in our eyes, our cameras have lenses. What if lenses didn't evolve? How would we see the world if that didn't happen? Would we be able to detect food? … Maybe these things wouldn't happen. And by asking these questions, we can start to investigate why we have the vision that we have today, and, more importantly, why we have the visual intelligence that we have today.”

He had one more question. (Two more questions, really.)

“Our brains also develop at kind of the same pace as our eyes, and one would argue that, you know, we really see with our brains, not with our eyes, right? So what if the computational cost of the brain were much lower?”

He talked about the brain/eye scaling relationship, and key elements of how we process information visually.

Tiwary then suggested that this line of inquiry could inform AI research as we build agents in some of the same ways that we ourselves are built.

Computer Vision, Robotics, and Industrial Applications

At the same event, another standout talk came from Annika Thomas, a researcher working at the intersection of computer vision, robotics, and 3D scene understanding. Her focus: enabling collaborative visual intelligence for multi-agent systems operating in complex environments—from disaster zones to distant planets.

She described how today’s robots often operate like “solo travelers,” each building its own map of the world with little awareness of others around them. Her work focuses on changing that—teaching robots to “see intelligently” and to share what they see.

Thomas discussed a technique called Gaussian splatting, which allows robots to build fast, photorealistic 3D maps. Inspired by how our brains process visual information, this method helps teams of robots localize, recognize objects, and collaborate more effectively—whether they’re mapping a forest, packing boxes in a warehouse, or planning autonomous missions on the Moon. “It’s as if we’ve given them a shared consciousness about space,” she said. The vision she laid out—robots that see, understand, and act together—offers a glimpse of how collaborative AI could transform industries and reshape how we interact with intelligent machines.
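Thomas's talk was conceptual rather than code-level, but to make the shared-map idea concrete, here is a minimal, hypothetical sketch in Python of what a single "splat" in a Gaussian-splat map might hold, and how one robot's local map could be folded into another robot's coordinate frame. The Splat class, the merge_maps function, and the rigid-transform assumption are illustrative inventions for this article, not part of Thomas's system or any particular library; a real Gaussian-splatting pipeline would also optimize these parameters against camera images, which is omitted here.

```python
# Illustrative sketch only: a toy Gaussian-splat map entry and a naive merge of
# two robots' local maps into one shared map. Names are hypothetical and do not
# come from any specific Gaussian-splatting library.
from dataclasses import dataclass
import numpy as np


@dataclass
class Splat:
    """One 3D Gaussian: where it sits, how it is stretched, and how it looks."""
    mean: np.ndarray        # (3,) center of the Gaussian in world coordinates
    covariance: np.ndarray  # (3, 3) shape and orientation of the Gaussian
    color: np.ndarray       # (3,) RGB values in [0, 1]
    opacity: float          # how strongly this Gaussian contributes when rendered


def merge_maps(map_a: list[Splat], map_b: list[Splat],
               transform_b_to_a: np.ndarray) -> list[Splat]:
    """Naively fuse robot B's splats into robot A's frame via a 4x4 rigid transform."""
    rotation = transform_b_to_a[:3, :3]
    translation = transform_b_to_a[:3, 3]
    merged = list(map_a)
    for s in map_b:
        merged.append(Splat(
            mean=rotation @ s.mean + translation,
            covariance=rotation @ s.covariance @ rotation.T,
            color=s.color,
            opacity=s.opacity,
        ))
    return merged


if __name__ == "__main__":
    # Two toy one-splat maps; robot B's frame is shifted 1 m along x relative to A's.
    a = [Splat(np.zeros(3), np.eye(3) * 0.01, np.array([1.0, 0.0, 0.0]), 0.9)]
    b = [Splat(np.zeros(3), np.eye(3) * 0.01, np.array([0.0, 1.0, 0.0]), 0.9)]
    T = np.eye(4)
    T[0, 3] = 1.0
    shared = merge_maps(a, b, T)
    print(len(shared), "splats in the shared map")
```

The point of the sketch is simply that a splat map is an explicit, transportable data structure: once two robots agree on a relative pose, their maps can be expressed in one frame and treated as a single shared model of the space.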

The bottom line is that we have all of these highly complex models – we have the neural nets, which are fully digital, and now we have proto-brains growing in a petri dish.

Then we also have these bodies of research that show us things like how the human brain evolved, how it differs from its artificial alternatives, and how we can continue to drive advancements in this field.

Last, but not least, I recently saw that scientists believe we’ll be able to harvest memories from a dead human brain in about 100 years, by 2125.

Why so long?

I asked ChatGPT, and the answer I got was threefold: first, decomposition makes the job difficult; second, we don't yet have a full map of the human brain; and third, the desired information is stored in delicate structures.

In other words, the memories in our brains are not stored as binary ones and zeros, but in neural structures and synaptic strengths, which are hard for any outside party to measure.

It occurs to me, though, that if artificial intelligence itself has this vast ability to perceive small differences and map patterns, this type of capability may not be as far away as we think.

That’s the keyword here: think.
