YASHICA 4K night vision binoculars open up a whole new world for you to explore

People are split on what they think of the night. Some find solace in the rest it offers, while others are wary of the dangers lurking in its corners. The wariness mostly comes down to the uncertainty that the unknown brings, an uncertainty we tend to associate with the dark of night. But nighttime and dark places are just as filled with treasures to discover, adventures to experience, and discoveries to make, as long as you’re not literally stumbling in the dark. Being able to see at night is often painted as a superpower, but today’s technologies let you gain that ability quite easily. Harnessing decades of experience in optics and photography, YASHICA is opening the doors to new and exciting experiences with a pair of binoculars that brings the night to life in full color and stunning 4K quality.

Designer: YASHICA

Click Here to Buy Now: $139 $252 ($113 off). Hurry, less than 48 hours left! Raised over $250,000.

Catch every detail in 4K UHD

There have been cameras that can see in the dark for years now, but most of them fail to impress or captivate budding explorers. The majority can only see in green or monochrome hues, and they often lack the detail needed to really make you appreciate the wonderful world the night holds. The YASHICA Vision easily sets itself apart from the crowd by breaking down these barriers to deliver a photography experience that’s truly out of this world, letting you see at night as if it were day.

YASHICA Vision reveals a radiant spectrum of colors, even under the most challenging lighting conditions.

With an impressive 0.0037 lux sensitivity and a wide F/1 lens aperture, the YASHICA Vision binoculars can take in as much light as they need to capture detailed, sharp, high-resolution visuals. And thanks to advanced optics and a powerful CMOS sensor, these images aren’t stuck with a dozen shades of green or gray; the binoculars paint the night in full color, creating a picture you wouldn’t otherwise see with your naked eye. Best of all, you can record that picture or video in stunning 4K quality, leaving no detail behind.

Clarity in complete darkness.

The YASHICA Vision further redefines night-time exploration with its remarkable aperture size of F/1. This feature is crucial as it allows for a higher light intake, especially under low-light conditions.
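For a sense of scale (a quick back-of-the-envelope comparison, not a figure from YASHICA): the light a lens gathers scales with the inverse square of its f-number, so F/1 glass pulls in dramatically more light than the F/2.8 optics common on consumer zoom lenses.

```latex
% Illuminance E at the sensor scales inversely with the square of the f-number N:
E \propto \frac{1}{N^{2}}
% Comparing F/1 to a typical F/2.8 consumer lens:
\frac{E_{F/1}}{E_{F/2.8}} = \left(\frac{2.8}{1}\right)^{2} \approx 7.8
```

In other words, at the same shutter speed an F/1 lens delivers nearly eight times the light of an F/2.8 lens, which is what makes usable color at 0.0037 lux plausible in the first place.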

These qualities are more than enough for urban exploration, delving into creepy basements, or watching the coast in the dark of night, but the YASHICA Vision still has more to offer, especially for those who want to get close to nature after dark. With the ability to see objects 600 meters away even in pitch darkness, plus 3x optical zoom and 5x digital zoom, wildlife photography at night becomes not only possible but also safe and enjoyable. What’s even more impressive is that the YASHICA Vision’s full-color night view is powered by AI, which analyzes vast amounts of data to automatically improve the image by reducing noise, enhancing contrast, and compensating for light. The result is natural color reproduction and a higher dynamic range, even in low light and at low lux levels. There is almost nothing you can’t see in the dark, and the night becomes your playground rather than a source of fear and anxiety.
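YASHICA hasn’t published the details of that AI pipeline, but the chain described here (noise reduction, contrast enhancement, light compensation) can be sketched with off-the-shelf tools. Below is a minimal illustration using Apple’s Core Image; the filters are real Core Image filters, but the pipeline order and parameter values are assumptions for demonstration, not the binoculars’ actual processing.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustrative sketch only: a generic low-light enhancement chain
// (denoise -> contrast -> exposure compensation) built with Apple's
// Core Image. This is NOT YASHICA's actual pipeline; the filter
// parameters below are assumptions chosen for demonstration.
func enhanceNightImage(_ input: CIImage) -> CIImage {
    // 1. Reduce sensor noise, which dominates at very low lux levels.
    let denoise = CIFilter.noiseReduction()
    denoise.inputImage = input
    denoise.noiseLevel = 0.04   // assumed strength
    denoise.sharpness = 0.6     // recover edge detail lost to smoothing

    // 2. Lift contrast so the denoised image doesn't look flat.
    let tone = CIFilter.colorControls()
    tone.inputImage = denoise.outputImage
    tone.contrast = 1.15        // assumed modest boost
    tone.saturation = 1.1       // nudge colors toward a "full color" look

    // 3. Compensate exposure for the dim scene.
    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = tone.outputImage
    exposure.ev = 1.5           // assumed +1.5 stops

    return exposure.outputImage ?? input
}
```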

It might look like a pair of bulky binoculars, but the YASHICA Vision is a truly innovative photography device designed to accompany you on your nocturnal adventures. A 16-hour battery life and support for up to 512GB microSD cards promise minimal downtime as you go about your way in the dark. An intuitive and convenient binocular design lets users focus comfortably on seeing instead of fumbling with the controls. Finally, a robust construction, an IP65 dust and water resistance rating, a built-in compass, and SOS guiding lights all mark the device as a reliable companion for your most daring exploits at night.

Whether you’re trying to discover what nature has to offer once the sun has set, trying to debunk urban legends and mysteries, or simply trying to enjoy the world after dark, the YASHICA Vision is a groundbreaking tool that throws the doors wide open to a whole new world filled with life, color, and wonder, even in the dark of night.

Click Here to Buy Now: $139 $252 ($113 off). Hurry, less than 48 hours left! Raised over $250,000.

Lightweight XREAL Air 2 Ultra glasses deliver advanced VR experience at a fraction of the price of the Apple Vision Pro

Talk of AR glasses and the first names that come to mind are the Apple Vision Pro and Meta Quest 3. With both priced exorbitantly given their early stage of development, a lower-priced alternative is bound to attract attention. That’s exactly what the $699 XREAL Air 2 Ultra is, with a shipping date slated for sometime in March 2024 for early adopters.

The wearable is a cross between AR glasses and smart spectacles, making it highly practical for real-life situations. At CES 2024, we got a chance to experience the Air 2 Ultra with its directional audio technology and were impressed. XREAL’s vision of bringing augmented reality (AR) to everyone also resonated with us. No wonder the glasses won our “Best of CES 2024” award at the mega event!

Designer: XREAL

These new fashion-forward glasses are lighter than the previous version at 72 grams versus 80 grams. Like the Air 2, the display is 1080p, with a 120Hz refresh rate and 500 nits of brightness. A worthy upgrade comes in the form of a 52-degree field of view and 42 pixels per degree, the latter even better than the Apple Vision Pro. The hardware also gains an additional pair of cameras, one on each side, for six degrees of freedom (6DoF) positional tracking. This enables two-handed interaction for a surreal experience and applications like 3D mesh creation and future-proof AI capabilities.

Talking of the mixed-reality experiences developers can create, the company has put much of its focus on spatial computing. To that end, the Air 2 Ultra comes with a suite of developer tools, including Nebula, XREAL’s in-house AR environment launcher, and the latest SDK. Given the glasses’ smaller size, comfortable form factor, and new in-frame sensors, developers will be more than eager to put that hardware to use for unique mixed-reality applications.

XREAL has also proactively partnered with Qualcomm Technologies, BMW Group, NIO, Quintar, and Forma Vision to create niche spatial computing interfaces. These come in the form of navigation instructions, hazard warnings, holographic meetings, or entertainment content.

Mercedes-Benz Vision iMobility combines style and functionality for ultra-relaxed commutes in the urban landscape

The future of level 5 autonomous mobility is largely going to revolve around relaxation and the whole experience of getting from point A to B. Inspired by the protective shell of a turtle, the Mercedes-Benz Vision iMobility concept emphasizes efficiency and resilience in a smooth, curvy design. The form factor in particular combines style and functionality, creating a bridge between cutting-edge automotive technology and the organic beauty of nature.

The designer imagines this vehicle dotting the four-wheeler landscape in the year 2050, when cars communicate with traffic systems, other vehicles, and the urban environment to optimize routes and bump up safety; a time when autonomy, connectivity, and sustainability are the driving forces of innovation. Richard states, “Vision iMobility isn’t just a means of reaching a destination; it’s a dynamic and adaptive space that caters to individual needs.”

Designer: Richard Huang

The nature-inspired details don’t end there: the headlights are reminiscent of a dwarf arrowhead flower. The front-open design makes it easy for the occupant to enter the cozy, relaxing interior, and the compact shape gives the iMobility flexibility in tight urban spaces. Running out of juice in this electric vehicle is out of the question, as it can be charged wirelessly just by parking on readily available charging junctions.


Since we are talking about complete autonomy, there’s no need for driving hardware like a steering wheel, brakes, or an accelerator. The interior is instead a personalized oasis for laying back, gaming in VR, or simply exploring different realities in metaverse worlds. The small size is not a limitation, as there is enough tactically designed space for sleeping in comfort. In fact, the interior is flexible enough to be arranged for different scenarios. The Mercedes-Benz Vision iMobility truly becomes an extension of the rider’s personal space for self-expression and fulfillment.

Richard conceived this compact autonomous vehicle for the East Asian workforce, who deal with demanding work cultures, long working hours, and intense competition. I believe this EV could be the perfect personal transportation for individuals living in any urban space.

Apple Vision Pro Air Typing experience takes a small step toward usability

It’s truly mind-blowing to see virtual objects floating before our eyes, but the magic and illusion start to break down once we try to manipulate those objects. Input has always been a tricky subject in mixed reality, either because we can’t see our actual hands or we can’t feel what we’re supposed to be touching, which is physically nothing. Until the perfect haptic feedback gloves become a reality, we have to make do with tricks and workarounds to make input less awkward and more convenient. That’s especially true with typing on air, and Apple is apparently using some special techniques to offer a more usable experience on the Vision Pro mixed reality headset.

Designer: Apple (via Brian Tong)

Apple’s first teaser for the Vision Pro headset and visionOS platform didn’t show typing of any sort. It focused, instead, on icons, windows, and menus, virtual 3D objects that are easier to interact with using hand gestures. Of course, sooner or later you will be faced with the need to input text, and the usual method of voice recognition won’t always cut it. visionOS, fortunately, does include a virtual floating keyboard like other VR systems, but the way you use it is quite special and, to some extent, ingenious.

For one, you can interact with the keyboard like you would any part of the Vision Pro’s interface, which is to look at the UI element to focus on it and then use hand gestures. In this case, pinching a letter is the equivalent of selecting it, just like what you’d do for menu items or icons in visionOS. It makes the gesture grammar consistent, but it’s also an awkward way to type.

You can also “peck” at the keys with your fingers, making you feel like you’re typing on air. The difference the Vision Pro makes, however, is that it tricks your eyes into believing you’re actually pressing down on those keys. Thanks to Apple’s flavor of spatial computing, hovering your real-world finger over a virtual key makes that key glow, and tapping on it triggers an animation of the key moving down, just like on a real keyboard. There’s also an accompanying click, similar to the sound effect you’d normally hear on an iOS virtual keyboard, to complete the audiovisual illusion.
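For developers curious how that illusion might be recreated, here’s a minimal SwiftUI sketch, not Apple’s actual keyboard code: the hover glow comes from SwiftUI’s system hoverEffect modifier, while the key travel, scale, timing, and click sound are hypothetical approximations of the behavior described above (the AirKey view and all its parameters are invented for illustration).

```swift
import SwiftUI
import AudioToolbox

// A minimal sketch (not Apple's implementation) of the air-typing illusion:
// a virtual key that glows when a finger or gaze hovers over it, visually
// "depresses" when tapped, and plays a keyboard click sound.
struct AirKey: View {
    let letter: String
    let onType: (String) -> Void
    @State private var pressed = false

    var body: some View {
        Text(letter)
            .font(.title2)
            .frame(width: 56, height: 56)
            .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 12))
            // System-provided highlight when hovering over the key.
            .hoverEffect(.highlight)
            // Fake the key travel: sink (visionOS z-offset) and shrink slightly.
            .offset(z: pressed ? -6 : 0)
            .scaleEffect(pressed ? 0.94 : 1.0)
            .animation(.easeOut(duration: 0.08), value: pressed)
            .onTapGesture {
                pressed = true
                AudioServicesPlaySystemSound(1104) // iOS keyboard click sound
                onType(letter)
                Task {
                    // Release the key after a brief, assumed travel time.
                    try? await Task.sleep(for: .milliseconds(80))
                    pressed = false
                }
            }
    }
}
```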

Of course, your fingers aren’t actually hitting anything physical, so there’s still a disconnect that will probably confuse your brain. The visual effect, which is really only possible thanks to spatial computing, is still an important step forward in helping our minds believe that there’s a “real” three-dimensional object, in this case, a keyboard, right in front of us. It’s not going to be the most efficient way to input text, but fortunately, you can connect a wireless keyboard to the Vision Pro and you’ll be able to see your actual hands typing away on it.

Is the Apple Watch Series 9 secretly going to become the new Controller for the Vision Pro headset?

As Apple revealed the latest fleet of the Apple Watch collection, one feature stood out as the most remarkable as well as the most intriguing. The Watch Series 9 and Watch Ultra 2 both boasted a new gesture input: tapping your fingers twice to register a button press. This works remarkably well if your hands are occupied or dirty, letting you answer/end calls, snooze alarms, play/pause music, and even trigger your iPhone shutter simply by tapping your index finger and thumb together… without touching your Apple Watch at all. Sounds impressive, but also sounds extremely familiar, doesn’t it? Because tapping your fingers is exactly how the Apple Vision Pro registers click inputs too.

Designer: Apple

When Apple debuted the Vision Pro at WWDC in June, their biggest claim was that the Vision Pro was an entirely controller-free AR/VR headset, letting you manipulate virtual objects using just your hands. However, news emerged that Apple was, indeed, figuring out a traditional controller substitute that would be much more reliable than just human hands. It seems like the Apple Watch could be that perfect alternative.

The Watch Series 9 and Watch Ultra Series 2 were unveiled this year with a few standout upgrades. Both watches now offer 2,000 nits of peak brightness, double last year’s capability. Both also rely on the new S9 SiP (the watch’s dedicated chipset), which now runs Siri locally on the device without relying on the internet. The watches are also accompanied by new bands, including the FineWoven fabric that now replaces all leather accessories in Apple’s catalog… but more importantly, both the Watch Series 9 and Watch Ultra Series 2 accept the new finger-tapping gesture, which does what pressing the home button on either watch would do. The feature is due to roll out next month as Apple calibrates how it works… but its implications go beyond just the watch. In fact, the Watch could be the secret controller the Vision Pro truly needs to enhance its Spatial Computing experience.

Sure, the Vision Pro has multiple cameras that track your environment, also keeping an eye on your hands to see where you’re pointing, tapping, and pinching. The big caveat, however, is any situation where the Vision Pro CAN’T see your hands. If you’ve got your hands under a table, in your pocket, or behind your back, the Vision Pro potentially wouldn’t be able to recognize your fingers clicking away… and that’s a pretty massive drawback for the $3500 device. Potentially though, the Apple Watch helps solve that problem by being able to detect finger taps… although only on one hand.

The ‘Double Tap’ feature works by relying on the S9 SiP. The chipset uses machine learning to interpret data from the accelerometer, gyroscope, and optical heart sensor to detect when you tap your fingers twice. The feature only works with the hand that’s wearing the Watch (you can’t tap your right-hand fingers while the Watch is on your left wrist), but even that’s enough to solve the Vision Pro’s big problem. Moreover, the new Ultra Wide Band chip on the watch can help with spatial tracking, letting your Vision Pro know when your hands are in sight and when they aren’t. While Apple hasn’t formally announced compatibility between the Watch and the Vision Pro, we can expect more details when Apple’s spatial-computing headset formally launches next year. The Vision Pro could get its own dedicated keynote event, or even be clubbed with the new iPad/MacBook announcements that often happen at the beginning of the calendar year.
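To make that concrete, here’s a deliberately simplified, hypothetical sketch of what gesture detection from raw motion data can look like. Apple’s real Double Tap runs a machine-learning model over the accelerometer, gyroscope, and heart sensor; this toy version uses only CoreMotion’s accelerometer and a naive two-spike heuristic, with made-up thresholds, purely to illustrate the shape of the problem.

```swift
import CoreMotion
import Foundation

// Heavily simplified, hypothetical double-tap detector. Apple's actual
// feature fuses accelerometer, gyroscope, AND heart-sensor data through
// a machine-learning model on the S9 SiP; this sketch just looks for two
// accelerometer spikes in quick succession.
final class DoubleTapDetector {
    private let motion = CMMotionManager()
    private var lastSpike: Date?

    // Assumed thresholds, not Apple's tuned values.
    private let spikeThreshold = 1.8      // g-force magnitude of a tap
    private let windowRange = 0.1...0.5   // seconds allowed between taps

    func start(onDoubleTap: @escaping () -> Void) {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 100.0  // sample at 100 Hz
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self, let a = data?.acceleration else { return }
            let magnitude = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
            guard magnitude > self.spikeThreshold else { return }

            let now = Date()
            if let last = self.lastSpike,
               self.windowRange.contains(now.timeIntervalSince(last)) {
                self.lastSpike = nil
                onDoubleTap()   // two spikes close together: register tap-tap
            } else {
                self.lastSpike = now
            }
        }
    }

    func stop() { motion.stopAccelerometerUpdates() }
}
```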

What if Instagram Went Spatial? Unofficial UI on Apple Vision Pro Shows How

Unofficial Instagram UI for Apple Vision Pro

The internet sure has a short memory. It’s barely been three months since Apple debuted the Vision Pro, and it pretty much looks like we’ve entirely forgotten about it. However, people experimenting with the developer kit seem to be incredibly impressed with its underlying tech (some even let out audible gasps when they tried the Vision Pro out). So while the hardware is still a while away from officially hitting the shelves, it’s safe to say that developers are excited to build spatial-ready versions of their apps, platforms, websites, and games. Last month we looked at an unofficial Spotify UI for the Vision Pro, and now we’ve got a taste of what Instagram would look like through Apple’s headset.

Designer: Ahmed Hafez

Visualized by Cairo-based designer Ahmed Hafez, this Instagram UI comes with neutral frosted glass elements that allow the content to stand out against the background. This approach works rather wonderfully in the spatial world as the contrast allows you to easily see text and elements whether you’re in an illuminated space or even a dimly lit one. Theoretically, it looks like Apple may have ended the “light-mode/dark-mode” UI debate by just making everything frosted.
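visionOS actually ships a system-wide glass material for exactly this purpose, so a frosted card in the spirit of Hafez’s mockup could be sketched in a few lines of SwiftUI. The FeedCard view below is a guess at the concept, not Instagram’s or Apple’s code; glassBackgroundEffect is the visionOS modifier for that frosted look, and the layout details are assumptions.

```swift
import SwiftUI

// A minimal sketch of the frosted-glass approach described above, using
// visionOS's system glass material. The layout is a guess at Hafez's
// concept, not Instagram's actual interface.
struct FeedCard: View {
    let username: String
    let caption: String

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text(username).font(.headline)
            RoundedRectangle(cornerRadius: 16)
                .fill(.quaternary)          // placeholder for the photo
                .frame(height: 320)
            Text(caption).font(.body)
        }
        .padding(20)
        // visionOS's standard frosted glass: keeps text legible whether
        // the room behind the window is bright or dimly lit.
        .glassBackgroundEffect(in: RoundedRectangle(cornerRadius: 24))
    }
}
```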

The interface looks a lot like Instagram’s desktop (and now even its iPad) interface. It’s wider than its mobile counterpart and comes with menus on the left and content on the right. You can view stories in the upper carousel, or move higher up to access follow requests, close friends, notifications, and DMs.

The fix for the light vs. dark issue is present in the interface too. While the glassy elements don’t change color, you can alternate between white or black text for better visibility. The interface, however, isn’t traditionally landscape. It’s still quite vertical, which is perfect for spatial computing because you can merely move it to the side and have other tabs/apps open – a promise that Apple made rather clearly with their WWDC keynote.

The Vision Pro is still at least half a year away from formally being available to consumers, although rumors say that Apple is running into quite a few roadblocks with production and plans to cut its production run drastically, from the original 400,000 units down to 150,000. That being said, the company isn’t giving up on the idea any time soon, and the Vision Pro is mainly paving the way for a much more affordable Vision Air device. Before that happens, though, it’s important for developers to create a strong app ecosystem to justify the shift from physical computing to spatial computing. This fan-made IG interface is a first step in that direction!


Everyday Products Get a Magnificent ‘Retrofuturistic’ Upgrade Through AI’s Vision

Sci-fi’s entire endeavor has been to imagine worlds in alternate futures, alternate realities, and alternate universes; to plot timelines, envision technologies, and build entire lifestyles and societies around them. It isn’t easy, which is why good sci-fi is hard to come by… but with a little help from AI, maybe things will be a little easier. Created as part of Vadim Sadovski’s ‘Alternate Reality Retrofuturism’ series, these products imagine life in a world where steampunk and transparency co-mingle as dominant design themes. Sadovski relied on AI tools like Midjourney to envision these products, all of which share a similar otherworldly aesthetic filled with detail and chaos. There’s little method to the visual madness of this series, but if there’s one thing we can all agree on, it’s that every image here is absolutely fascinating, showing how powerful these generative AI art tools can be with the right prompt.

Designer: Vadim Sadovski

If there’s one thing that’s incredibly challenging to do with AI tools, it’s to make them imagine things they haven’t seen before. AI image generators rely on their training data to create final results, and more often than not, it’s difficult to find images of relevance – case in point being something like a transparent camera. What Sadovski’s pulled off here is quite impressive, considering the AI has done a remarkable job of rendering not only a transparent housing but also the components underneath. The camera above is a stellar example of how good these tools have gotten. Right below is a Sony loudspeaker/amplifier.

A vacuum cleaner above and a helmet below show the AI’s capability for creating uniquely detailed products. Transparent vacuum cleaners and semi-transparent helmets aren’t a thing, but with just the right prompt and enough trial and error, it’s easy to get the AI to visualize something remarkable.

Retrofuturism, however, is more than just turning products transparent – it’s about combining retro and futuristic elements to create something so absurdly beautiful, it seems like it’s from an alternate universe. The Macintosh computer above and the Netflix TV below are prime examples of retrofuturism done right – at least visually. There’s an ongoing debate about whether AI art is actually art, and whether AI creations deserve the same merit as human-made ones… but if there’s one thing that’s certain, it’s that AI does a phenomenal job of being imaginative. Sometimes that results in ‘hallucinations’, with the AI jumbling things up by putting 7 fingers on a hand instead of the usual 5. However, I firmly believe the AI’s job is to boost our own imagination, not replace it.

Sadovski’s approach to this series looks beyond conventional gadgets too. A cat bed above, and a human bed (or resting chamber/pod) below show how the AI works across different categories. There’s an undeniable ‘outer space’ vibe to both beds, which come with enclosed glass chambers and what seems like an air filtration unit for supplying the occupant with fresh air.

If you want to check out the rest of Vadim Sadovski’s AI series, follow him on Instagram.