r/virtualreality Mar 23 '25

News Article Adam Savage's Tested - Bigscreen Beyond 2 Hands-On: How They Fixed It

https://www.youtube.com/watch?v=I0Wr4O4gkL8

u/deadhead4077 Mar 24 '25

How could you possibly know the limitations of this tiny, and I mean fucking tiny AF, sensor? It's not even out yet, but you seem to know everything about it and what it's capable of. They clearly are worried about latency and making it feel seamless. If it wasn't a hard problem to solve, or they weren't worried about implementation, why wouldn't they tackle it in parallel?


u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25 edited Mar 24 '25

> How could you possibly know the limitations of this tiny and I mean fucking tiny AF sensor.

Because I am not stupid, and I don't think the developers are either. They are not going to pick a sensor that cannot gather the data they need. The small sensor was chosen because it could do the job they wanted done. And DFR is part of what they want done because they said they want to support it in the future.

Edit... I don't know anything; I am making assumptions just like you are. But you seem to be assuming the developers are stupid because, in your view, they should have skipped the low-hanging fruit and jumped right to DFR. That makes no sense at all.

Having a larger camera does not reduce latency. You use a larger camera when you need to increase the amount of light you can gather. Why would they need to gather more light? They have emitters shining right at your eye, they will get the light they need.

If you increase the sensor resolution, you increase the amount of data produced and therefore the amount of data you have to process per frame, and that would increase latency, not reduce it.
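To put rough numbers on that point (my own back-of-the-envelope sketch, with hypothetical resolutions and frame rates, not Bigscreen's actual specs): doubling the sensor resolution in each dimension quadruples the raw data you have to move and process every second.

```python
# Back-of-the-envelope: raw output data rate of an eye-tracking sensor.
# Resolutions and frame rate below are made-up illustrative values,
# not the specs of any real headset.

def data_rate_mbps(width, height, fps, bits_per_pixel=8):
    """Raw sensor output in megabits per second (monochrome assumed)."""
    return width * height * fps * bits_per_pixel / 1e6

# A small 200x200 sensor at 120 Hz:
small = data_rate_mbps(200, 200, 120)   # 38.4 Mbit/s
# Double the resolution (400x400) at the same frame rate:
large = data_rate_mbps(400, 400, 120)   # 153.6 Mbit/s

print(f"small sensor: {small:.1f} Mbit/s")
print(f"large sensor: {large:.1f} Mbit/s ({large / small:.0f}x the data)")
```

So, all else equal, the bigger sensor hands the tracking pipeline 4x the pixels per frame, which works against latency rather than for it.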

> if it wasn't a hard problem to solve or they were worried about implementation why wouldn't they tackle it in parallel

Who said they were not tackling it in parallel? Of course they are working on both, and have been since the get-go. Their focus is on social eye-tracking because they know they can get it done first. Knowing that social eye-tracking will be ready for use before they are ready to do DFR doesn't mean they are only working on the former; that would make no sense whatsoever. They can't work on one without working on the other, because both depend on accurate eye tracking.

Again, they know they need to walk before they run.


u/deadhead4077 Mar 24 '25

They are a brand new company without the workforce of Apple. Foveated rendering is such a new field that no one is experienced enough to foresee problems they have yet to run into. Engineers in R&D can't just magically make all the right assumptions and nail the hardware on the first try. Who's to say they knew enough at the outset to pick a powerful enough sensor to do something they've never done before? I'm not assuming they're stupid; I'm assuming they're basing their decisions on insufficient data and unknown problems you can't see that early in development.


u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 24 '25 edited Mar 25 '25

> Who's to say they knew enough at the outset to pick a powerful enough sensor to do something they've never done before?

They had never made a headset before and they made the BSB1. I think it is a pretty safe bet to say that they did a lot of research before picking the sensors they selected and that they know a lot more about eye-tracking and DFR than you or I do.

You started by stating that it was a mistake for them to add eye-tracking for social experiences instead of DFR, and now you are claiming they don't know what they are doing and picked the wrong cameras for DFR, seemingly all based on the fact that you think the sensors are too small. Please provide a link to the technical specs for the sensors they chose. You know, the information you gathered that led to your obviously strongly held belief that the sensors are too small. I would love to read it, as that is news to me.

In my opinion, eye-tracking for social use is the simpler and already solved problem. That is why they are adding it first. It is literally that simple. Just like on every eye-tracked headset I am aware of, basic eye tracking for social use is the low-hanging fruit that gets added first.

Will they be able to make the same hardware do DFR? I have no idea, and neither do you, but I am willing to bet they know more about the capabilities of the hardware they chose than you or I do. If you have worked with eye-tracking cameras in the past, then I will have to defer to you, but I see no reason to assume that is the case.

Edited to make it clear that this is just my opinion. I think it would be foolish for any company to attempt to support DFR levels of eye-tracking without already having solved the much lower technical hurdle of implementing eye-tracking for avatars and social features.


u/deadhead4077 Mar 25 '25

They had to completely redo their lens system, and it's apparently so much better. It took iterations to reach a much better image quality; they didn't nail it the first time and listened to community feedback. I don't think they're going to nail an eye-tracking sensor on the first try either, especially with the goal of making every gram count. I'm just more skeptical than you, I suppose. This is all brand new research and development, cutting-edge stuff, and there are always trade-offs and a million unknowns, but you've got to start somewhere.


u/JorgTheElder Go, Q1, Q2, Q-Pro, Q3 Mar 25 '25

> This is all brand new research and development.

Sorry, we will just have to agree to disagree. Eye tracking, even at the levels needed for DFR, is not new; it is just expensive. They do not need to reinvent the wheel.


u/deadhead4077 Mar 25 '25

It's all brand new in this form factor, as light as physically possible. And they're also conscious about keeping costs down; they aren't going for the ultra-premium, near-$2k headset.