Posts tagged hardware

New Canon Camera Has An ISO Of Over 4 Million!


Capturing moments of our lives has always been an important element of human culture. Before modern technologies existed, people told stories, then later learned to write those stories down. When cameras were invented, people suddenly had the opportunity to take snapshots of their lives, whether spontaneous or artistic, that they could later admire. Nowadays, our phones enable us to easily combine still photos with video, yet there has always been one constraint on capturing and sharing that only storytelling escapes: the time of day, i.e., how much light is available. Photos can have perfect composition but be ruined by bad lighting. On the other hand, lighting can be artistically manipulated to create different effects that can actually enhance the look (e.g., with filters or digital adjustments).

In photography, there is a technical measure of how sensitive the camera is to light when taking a picture; in other words, how strongly it responds to bright or dim scenes. This measure is called the ISO, commonly pronounced "eye-so", and it is something that even film for early cameras could effectively adjust: you could buy ISO 100 film for sunny photos, ISO 200 film for cloudy photos, and ISO 400 film for indoor shots. The higher the ISO, the more sensitive the camera is to low light. The same rules apply to video. Although older cameras only went up to an ISO of 400, nowadays more expensive cameras go into the thousands. Just recently, Canon released a camera that has the potential to rock the photography/videography world; not for the quality of its photos and videos, although that is excellent too, but for its ISO, which can be set all the way to 4 million.

The video below is about the CMOS sensor, which has been upgraded slightly over the past two years, but you can still see the incredible video quality.

You may be wondering what that even means. If an ISO of 400 is good for taking photos inside, and ISOs in the thousands are good for even darker lighting, what does an ISO of 4 million, 10,000 times more sensitive than what's needed for indoor lighting, do? Well, it turns out that setting your camera to an ISO of 4 million allows you to literally shoot in the dark, effectively giving your camera night vision. Not infrared night vision, where the picture looks like a color-inverted image, but real night vision, meaning you can film during the night and the video or image will look much as it would if you were shooting during the day.
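To put that jump in perspective, photographers usually think in "stops," where each doubling of ISO adds one stop of light sensitivity. A quick back-of-the-envelope calculation (just an illustration of the usual rule of thumb, not tied to any particular camera's behavior) shows how far 4 million is from everyday settings:

```python
import math

base_iso = 400        # a typical indoor film speed
max_iso = 4_000_000   # the new camera's top setting

ratio = max_iso / base_iso   # how many times more sensitive
stops = math.log2(ratio)     # each doubling of ISO = one stop

print(f"{ratio:.0f}x more sensitive, about {stops:.1f} stops")
```

That works out to roughly 13 stops beyond ordinary indoor settings, which is why the result looks like genuine night vision rather than a slightly brightened picture.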

This technology was invented by Canon back in 2013 with their CMOS sensor, which has now been integrated into Canon's new camera, the ME20F-SH. The camera is essentially just a cube with a lens, and surprisingly small, only around 4 inches across. It weighs two pounds, which is fairly heavy for a camera, but that still allows the device to be used in a wide variety of situations and doesn't inhibit its portability. And although raising the ISO on regular cameras degrades video quality, the ME20F-SH still shoots at HD quality, allowing serious filmmakers to use this camera for professional films.


Specs aside, this camera opens up a whole new world of possibilities for filmmakers. From cave explorers to experimental directors, this camera can be used in an incredible variety of ways simply because it can see in the dark. Now, the camera isn't for amateur photographers or directors who simply want a clear night-sky shot; after all, the expected price of the camera is $30,000. But for people who have the ideas and also have the money, this camera may totally change the way they film. For the first time in the history of capture-based art and storytelling, light isn't an obstacle.

Augmented Vs. Virtual Part 2 – Augmented Reality


Reality is very personal: it is how we perceive the world around us, and it shapes our existence. And while individual experiences vary widely, for as long as humans have existed, the nature of our realities has been broadly relatable from person to person. My reality is, for the most part, at least explainable in terms of your reality. Yet as technology grows better and more widespread, we are coming closer to an era where my reality, at least for a period of time, may be completely inexplicable in terms of your reality. There are two main ways to do this: virtual reality and augmented reality. In virtual reality, technology immerses you in a different, separate world. My earlier article on VR was the first of this two-part series, and can be found HERE.

Whereas virtual reality aims to totally replace our reality with a new vision, augmented reality does what the name suggests: it augments, changes, or adds on to our current, natural reality. This can be done in a wide variety of ways, the most popular currently being a close-to-eye translucent screen that projects graphics on top of what you are seeing. This screen can take up your whole field of view or occupy just a corner of your vision. Usually, the graphics or words displayed on the screen are not completely opaque, since they would otherwise block your view of your real surroundings. Augmented reality is intrinsically designed to work in tandem with your current reality, while VR dispenses with it in favor of a new one.
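That translucency is, at bottom, ordinary alpha compositing: every displayed pixel is a weighted mix of the rendered graphic and the real scene behind it. Here is a minimal sketch of the idea (my own illustration, not any headset's actual rendering pipeline):

```python
def blend_pixel(overlay, background, alpha):
    """Simple alpha compositing of one RGB pixel.
    alpha=0 shows only the real scene; alpha=1 fully occludes it."""
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for o, b in zip(overlay, background))

# A bright green HUD element drawn at 40% opacity over a gray scene
print(blend_pixel((0, 255, 0), (128, 128, 128), 0.4))
```

An alpha of 0 leaves the real world untouched, while an alpha of 1 would fully block it out, which is exactly what AR overlays are designed to avoid.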


An example of a consumer use case for tablet-based AR.

With this more conservative approach, augmented reality (AR) likely has greater near-term potential. For VR, creating a new world to inhabit limits many of your possibilities to the realm of entertainment and education. AR, however, has a practically unlimited range of use cases, from gaming to IT to cooking to, well, pretty much any activity. Augmented reality is not limited to, but for now works best as, a portable heads-up display: a display that shows helpful situational information. For instance, at Epson's booth at Augmented World Expo 2015 there was a demo where you got to experience a driving-assistance app for AR. In my opinion, the hardware held back the software in that case, as the small field of view was distracting and the glasses were bulky, but you could tell the idea has potential. At AWE, industrial use cases were prominently displayed alongside consumer ones, including instructional IT assistance such as remotely assisted repair (e.g., in a power plant, using remote visuals and audio to help fix a broken part).

Before I go on, I have to mention one product: Google Glass. No AR article is complete without mentioning the Google product, the first AR device to make a splash in the popular media. Yet not long after Google Glass was released, it started fading out of the public's eye. Obvious reasons included the high price, the very odd look, and the social novelty: people couldn't think of ways they would use it. Plus, with the many legal and trust issues that went along with using the device, it often just didn't seem worth the trouble. Rumor has it that Google is working on a new, upgraded version of the device, and it may make a comeback, but in my opinion it's too socially intrusive and too new to gain significant near-term traction.

Although many new AR headsets are in the works (most importantly Microsoft's HoloLens), the development pace lags behind VR, which has already reached the stage where developers are focused on refining current design models, as I discussed in the previous VR article. For AR, the situation is slightly different. Hardware developers still have to figure out how to create a cheap AR headset that also has a full field of view, stays relatively small, doesn't obstruct your view when not in use, and clears other hurdles like that. In other words, the hardware of AR still occasionally interrupts the consumption of AR content, while VR hardware is well on its way to overcoming that particular obstacle.

Beyond these near-term obstacles, if we want to get really speculative, there could come a time when VR surpasses AR even in pure utility. This could occur when we are able to create a whole world, or many worlds, to be experienced in VR, and we decide that we like these worlds better. When the immersion becomes advanced enough to pass for reality, that's when we will abandon AR, or at least use it less and less over time. Science fiction has pondered this idea, and from what I've read, most stories go along the lines of people spending most of their time in the virtual world and sidelining reality. The possibilities are endless in a world made completely from the fabric of our imagination, whereas our current reality places many restrictions on what we can do and achieve. Most likely, though, that day is a long, long way off, so we have nothing to worry about for now.

Altogether, augmented reality and virtual reality are both innovative and exciting technologies that have tremendous potential to be useful. On one hand, AR will most likely be used more than VR in the coming years for practical purposes, since it's grounded in reality. On the other hand, VR will be used mostly for entertainment, until we hit a situation like the one I described above. It's hard to pit these two technologies against each other, since they both have their pros and cons, and it really just depends on which tech sounds most exciting to you. Nonetheless, both AR and VR are worth the attention and hype, as they will both surely change our world forever, for better or worse.

TechSpot: zSpace 3D CAD System


CAD (Computer-Aided Design) systems are key tools for hardware design. They make it easy to view and virtually manipulate the object you are designing. Think of Google SketchUp: it is one of the most basic design tools available, yet you can still use it to make intricate designs. zSpace takes CAD to the next level.

While attending the Engadget Expand conference in San Francisco, I was lucky enough to try out zSpace on one of their animations. zSpace really immerses you in your animation. Using a stylus held in mid-air, you can move things around, virtually pick things up, disassemble items, turn them over, and view them from all directions. Unlike with 3D movies and TVs, if you turn your head or look from a different angle, the image doesn't get distorted. There's less need to view it from different angles, because you can virtually rotate objects. But if someone comes up and wants to view the design in 3D (with lightweight glasses), they can look at it as well, and the object won't be distorted; it will just appear at a different angle, like in real life.
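The reason the image stays undistorted is head tracking: the system re-renders the stereo pair from wherever your eyes actually are, rather than from one fixed viewpoint as in a 3D movie. As a rough sketch of the general technique (my own illustration, not zSpace's actual SDK), each frame the left- and right-eye camera positions can be derived from the tracked head position:

```python
def eye_positions(head_pos, right_dir, ipd=0.064):
    """Head-tracked stereo: place virtual left/right cameras half the
    interpupillary distance (IPD, ~64 mm) to either side of the tracked
    head position, so the scene renders correctly from any viewpoint."""
    half = ipd / 2
    left = tuple(h - half * r for h, r in zip(head_pos, right_dir))
    right = tuple(h + half * r for h, r in zip(head_pos, right_dir))
    return left, right

# Head centered at the origin; the head's "right" direction is +x
print(eye_positions((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

Because the two virtual cameras follow your real head every frame, walking around the display behaves like walking around a physical object.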

zSpace is also developer-friendly. You can get the SDK (software development kit) and design your own apps for it. There is even a contest where you design an app using zSpace's Access Center system; if you win, you get your very own zSpace system! If you do get a zSpace, applications extend across many fields. zSpace is the start of the future of engineering, graphic design, product design, and lots of other occupations. zSpace also has apps for medical needs, like a 3D human body, and the company encourages doctors and medical students to take part in its developer program.


zSpace is currently on the market for around $4,000, a pretty hefty sum. But for what you're getting, it could be worth it.
