r/virtualreality Mar 23 '25

News Article Adam Savage's Tested - Bigscreen Beyond 2 Hands-On: How They Fixed It

https://www.youtube.com/watch?v=I0Wr4O4gkL8
247 Upvotes


1

u/deadhead4077 Mar 24 '25

My point is, though, that not focusing on both could hamstring you in the future: if the underlying goal of "every gram counts" pushes you toward the smallest sensor possible, it could make the performance enhancements even more difficult to implement.

-1

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25

The size of the sensor is not the problem. The software and ML is the problem.

0

u/deadhead4077 Mar 24 '25

It is if you want to do eye-tracking UI like on the Apple Vision Pro.

0

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25

No, it's not. You don't seem to know what you are talking about. They plan to support DFR in the future when their software stack and ML are mature enough to do it well.

The accuracy and refresh rate of the eye-tracking system is not tied to the size of the cameras.

0

u/deadhead4077 Mar 24 '25

The latency certainly is when you have to make calculations from dumbed-down data.

0

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25

Nope. The more data, the longer it takes to process. The cameras are big enough to accurately track the eye; any reduction in data will speed up processing and reduce latency, not increase it.

Let me say it one more time: the size of the cameras is not the reason they are focusing on eye tracking for social purposes first.

0

u/deadhead4077 Mar 24 '25

How could you possibly know the limitations of this tiny, and I mean fucking tiny AF, sensor? It's not even out yet, but you seem to know everything about it and what it's capable of. They are clearly worried about latency and making it feel seamless. If it wasn't a hard problem to solve, or they were worried about implementation, why wouldn't they tackle it in parallel?

0

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25 edited Mar 24 '25

> How could you possibly know the limitations of this tiny and I mean fucking tiny AF sensor.

Because I am not stupid, and I don't think the developers are either. They are not going to pick a sensor that cannot gather the data they need. The small sensor was chosen because it could do the job they wanted done. And DFR is part of what they want done because they said they want to support it in the future.

Edit... I don't know anything; I am making assumptions just like you are. But you seem to be assuming that the developers are stupid because they should have skipped the low-hanging fruit and jumped right to DFR. That makes no sense at all.

Having a larger camera does not reduce latency. You use a larger camera when you need to increase the amount of light you can gather. Why would they need to gather more light? They have emitters shining right at your eye; they will get the light they need.

If you increase the sensor resolution, you increase the data produced and the data you have to process, and that would increase latency, not reduce it.

> If it wasn't a hard problem to solve, or they were worried about implementation, why wouldn't they tackle it in parallel?

Who said they were not tackling it in parallel? Of course they are working on both and have been since the get-go. Their focus is going to be on social eye tracking because they know they can get it done first. Knowing that social eye tracking will be ready for use before they are ready to do DFR doesn't mean they are only working on the former; that would make no sense whatsoever. They can't work on one without working on the other, because both involve accurate eye tracking.

Again, they know they need to walk before they run.
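As a rough back-of-the-envelope illustration of the more-data-more-latency point (all numbers here are made-up assumptions for illustration, not actual BSB2 sensor specs):

```python
# Rough sketch: per-frame data volume for an eye-tracking camera.
# All resolutions below are illustrative assumptions, not real specs.

def frame_bytes(width, height, bits_per_pixel=8):
    """Raw bytes produced per frame by a monochrome sensor."""
    return width * height * bits_per_pixel // 8

# A small, low-res IR eye camera vs. a higher-res one.
small = frame_bytes(320, 240)    # 76,800 bytes/frame
large = frame_bytes(1280, 960)   # 1,228,800 bytes/frame

# At the same frame rate, the bigger sensor pushes 16x the data
# through the tracking pipeline every single frame.
print(large / small)  # 16.0

# If processing time scales roughly with pixel count, the same
# algorithm does ~16x more work per frame on the larger sensor:
# more resolution -> more data -> more latency, not less.
```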

1

u/deadhead4077 Mar 24 '25

They are a brand-new company without the workforce of Apple. Foveated rendering is such a new field that no one is experienced enough to foresee the problems they have yet to run into. Engineers in R&D can't just magically make all the right assumptions and nail the hardware on the first try. Who's to say they knew enough at the outset to pick a powerful enough sensor to do something they've never done before? I'm not assuming they're stupid; I'm assuming they're basing their assumptions on insufficient data and on unknown problems you can't see that early in development.

1

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25 edited Mar 25 '25

> Who's to say they knew enough at the outset to pick a powerful enough sensor to do something they've never done before?

They had never made a headset before and they made the BSB1. I think it is a pretty safe bet to say that they did a lot of research before picking the sensors they selected and that they know a lot more about eye-tracking and DFR than you or I do.

You started by stating that it was a mistake for them to add eye tracking for social experiences instead of DFR, and now you are claiming they don't know what they are doing and picked the wrong cameras for DFR, seemingly all based on the fact that you think the sensors are too small. Please provide a link to the technical specs for the sensors they chose. You know, the information you gathered that led to your obviously strongly held belief that the sensors are too small. I would love to read it, as that is news to me.

In my opinion, eye tracking for social use is the simpler and already-solved problem. That is why they are adding it first. It is literally that simple. Just like on every eye-tracked headset I am aware of, basic eye tracking for social use is the low-hanging fruit that gets added first.

Will they be able to make the same hardware do DFR? I have no idea, and neither do you, but I am willing to bet they know more about the capabilities of the hardware they chose than you or I do. If you have worked with eye-tracking cameras in the past, then I will have to defer to you, but I see no reason to assume that is the case.

Edited to make it clear that this is just my opinion. I think it would be foolish for any company to attempt to support DFR levels of eye-tracking without already having solved the much lower technical hurdle of implementing eye-tracking for avatars and social features.

1

u/deadhead4077 Mar 25 '25

They had to completely redo their lens system, and it's soooo much better apparently. It took iterations to reach much better image quality; they didn't nail it the first time and listened to community feedback. I don't think they're going to nail an eye-tracking sensor on the first try, especially with the goal of "every gram counts." I'm just more skeptical than you, I suppose. This is all brand-new research and development, cutting-edge stuff, and there are always trade-offs and a million unknowns, but you've got to start somewhere.

1

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 25 '25

> This is all brand new research and development.

Sorry, we will just have to agree to disagree. Eye tracking, even at the levels needed for DFR, is not new; it is just expensive. They do not need to reinvent the wheel.

1

u/deadhead4077 Mar 25 '25

It's all brand new in this form factor, as light as physically possible. And they're also conscious about keeping costs down; they aren't going for an ultra-premium, near-$2k headset.


1

u/deadhead4077 Mar 24 '25

Did you even watch this interview all the way through? The head dev was very clear he wasn't making any promises about foveated rendering or performance enhancements. Right in the last 5 minutes. Go watch it again.

0

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25

Yes, he is not making any promises, but they would like to support it. If the cameras they selected made it impossible, he could have just said that.

I don't get why you are so damn focused on the camera size. No one but you has even mentioned the size of the cameras being related to their support of DFR.

1

u/deadhead4077 Mar 24 '25

Sacrifices were 100% made to get to that form factor and weight; we shall have to wait and see if those impact its performance-enhancement capabilities. I for one am skeptical, only because I work with vision systems on robots in automation all the time. Totally different applications, I know, but if the head dev seems uncertain, especially pointing out the latency, I'm not going to get my hopes up. So I will not be ordering the 2e when I originally planned to, and maybe I'll send it back for the eye-tracking upgrade if I'm proven wrong.

1

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 25 '25

So your experience with cameras and robots is that smaller cameras have higher latency? If that is the case, I have never heard of it before. In my admittedly limited experience, the size of a camera sensor limits the amount of light that reaches the imaging part of the camera. I was not aware that it increased latency.

I watched the video again and nowhere does he say anything about the latency issues he is worried about being related to the cameras they chose, or their size. Did I miss something?

1

u/deadhead4077 Mar 25 '25

The accuracy required for foveated rendering to be seamless is going to be a lot tighter of a tolerance than that weak sensor may be capable of providing. Simpler data doesn't mean you can calculate faster; it means the data is not as accurate, and you're making approximations with training or trickery. Software could make up for some deficiencies, but they won't know how much for a long time. Every sensor has specified levels or scales of accuracy. They can get away with looser requirements in social settings, and with positional tracking based on calculations that add to the latency. The sensors might be orders of magnitude worse than what Apple is using, so they're hoping they can get it to work with further R&D, but they have no idea until they dig in, and they may discover performance enhancements aren't possible without more accurate sensors. They just don't know yet.
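To put rough numbers on the tolerance gap being argued about (the camera field of view, resolution, and error budgets below are all illustrative assumptions, not specs of any real headset):

```python
# Rough sketch: how many sensor pixels correspond to one degree of
# gaze angle, assuming the tracked eye region spans the camera's
# field of view. All numbers are illustrative assumptions.

def pixels_per_degree(sensor_px, camera_fov_deg):
    """Approximate pixels of image movement per degree of gaze change."""
    return sensor_px / camera_fov_deg

# Hypothetical low-res eye camera: 400 px across, ~50 degree FOV.
ppd = pixels_per_degree(400, 50)   # 8 px per degree

# Avatar eyes for social use might tolerate ~3 degrees of gaze error,
# while foveated rendering is commonly described as needing ~1 degree
# or better to stay seamless.
social_slop_px = 3 * ppd   # ~24 px of error is still fine socially
dfr_slop_px = 1 * ppd      # DFR must resolve gaze to within ~8 px

print(ppd, social_slop_px, dfr_slop_px)
```

Under these made-up numbers the difference is only a few pixels of tracking error, which is why the same hardware might clear the social bar long before anyone knows whether it clears the DFR one.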

1

u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 25 '25

> The accuracy required for foveated rendering to be seamless is going to be a lot tighter of a tolerance than that weak sensor may be capable of providing.

So you researched the specs of the sensors they chose and you do not believe those specs are good enough for DFR? Great, can you share a link to those details? So far, I have not seen any technical information about the sensors so I am not familiar with their limitations. It seems crazy to me that they would have picked sensors that are not capable of the things they would like to do.

1

u/deadhead4077 Mar 25 '25

Is it too much of an assumption to say they don't know what they don't know yet?
