
The Monetization Conundrum Of Online Video


With the possible exception of the Super Bowl, I’d bet it’s safe to say that nobody likes ads. Whether before a movie or video, in commercial breaks during television programs, or in the middle of your favorite podcast, nobody really enjoys being told to buy this product or use this service (often in a cringe-worthy way) while they are enjoying their entertainment. Yet advertisements aren’t going away anytime soon; with their ads reaching larger and larger audiences, companies remain willing to allocate precious dollars to get their name out in every way they can. In the world of Internet publishing, ads have persisted as the staple of a creator’s income, despite significant shifts in the media landscape. But for online video, currently dominated by YouTube, advertisements have been a challenged revenue channel for creators hoping to earn a living.


I love YouTube and have massive respect for the creators who have made it their full-time occupation to publish videos on the platform. These individuals have spent an incredible amount of time and effort just to become popular enough to quit their day jobs and earn a living via YouTube. The sad part is that making a living on YouTube is harder than one might think. With popular YouTubers like PewDiePie making up to $7 million per year, it might be easy to regard YouTube as an easy path to fame and riches. But in reality, every YouTuber with even just 5,000 subscribers has put their heart and soul into their videos. As it is, money coming from ads just isn’t enough to allow YouTubers to make videos full-time until they become very popular, a level which many never reach.

Let’s do the math. The average personal income in the United States is roughly $30,000. The current YouTube ad rate is a $2 CPM ($2 for every thousand views). To earn even the average U.S. income, a YouTuber creating weekly videos (a common schedule) would have to average nearly 300,000 views per video, a figure usually only reached by a YouTuber with around 2 million subscribers (though this varies from channel to channel). Of course, the rate at which you create videos is key in this calculation; if you make a video every day, the required average drops to a more plausible 40,000 views. Compare that to the average CPM rate for TV, which is $19 (for an average 22-minute show). At that rate, you would only have to get 30,000 views per weekly video to reach the national average – much more sustainable.
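To make the arithmetic concrete, here’s the calculation above as a quick Python sketch. The $2 CPM and $30,000 income figures come from the text; everything else is simple division:

```python
CPM = 2.0               # dollars per 1,000 views on YouTube
TARGET_INCOME = 30_000  # average U.S. personal income, dollars/year

def views_needed_per_video(videos_per_year, cpm=CPM):
    """Average views each video needs to hit the target income at the given CPM."""
    views_per_year = TARGET_INCOME / cpm * 1000  # total annual views required
    return views_per_year / videos_per_year

print(round(views_needed_per_video(52)))           # weekly uploads -> ~288,462 views
print(round(views_needed_per_video(365)))          # daily uploads  -> ~41,096 views
print(round(views_needed_per_video(52, cpm=19)))   # weekly at TV's $19 CPM -> ~30,364 views
```

Run it and the “nearly 300,000” and “more plausible 40,000” figures fall right out, along with the roughly 30,000 weekly views needed at TV’s $19 CPM.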


Felix Kjellberg, aka PewDiePie, the most popular YouTuber on the platform. Last year, Felix sparked a small controversy when the public reacted negatively to his $7 million yearly income.

This isn’t just about YouTubers making more money because their media peers in television and film earn more. I’m not writing this out of pity for the struggling YouTubers who can’t earn a living wage yet spend all their time trying to grow their audience. The reason the $2 CPM needs to be increased is that it simply isn’t enough to allow YouTubers to grow and make the great content we all want to watch. Take Olga Kay, a YouTuber with around 1 million subscribers across her five channels. In an article in the New York Times, Olga talked about her hectic work schedule and how “If we [her friends] were coming to YouTube today, it would be too hard. We couldn’t do it.”

Olga said in the article that she has made $100,000 to $130,000 every year for the last three years, which is a good income; yet she is still constantly stressed about finances, as much of that money goes straight back into her channels to pay for editors, equipment, etc. Let’s be honest: no one making twenty videos a week, almost three per day, especially with 1 million subscribers, should be that worried about finances.

This is the first part in Fast Forward’s two-part series on YouTube’s advertisement and monetization conundrum. Stay tuned for the second article in the series over the next few days!

The Almost Impossible Ethical Dilemma Behind Autonomous Cars


You’re driving down the road in your Toyota Camry one morning on your way to work. You’ve been driving for 15 years now and pride yourself on the fact that you’ve never had a single accident. And you have to drive a lot, too; every morning you commute an hour up to San Francisco to your office. You pull into a two-lane street lined on both sides with suburban housing, and suddenly realize you took a wrong turn. You quickly look down at your smartphone, which is running Google Maps, to find a new route to the highway. When you look back up, you’re surprised to see that a group of 5 people, 3 adults and 2 kids, has unknowingly walked into your path. By the time you or the group notice each other, it’s too late to hit the brake or for the pedestrians to run out of the way. Your only option to save the 5 people from being injured, or even killed, by your car is to swerve out of the way… right into the path of a woman walking her child in a stroller. You notice all of this in the half a second it takes you to close the distance between you and the group to only 3-4 yards.

You now have but milliseconds to decide what path to take. What do you do? But more to the point of this article, what would an autonomous car do?

That narrative is a variant of the classic situation known as the Trolley Problem. The Trolley Problem has many variations, some more famous than others, but all of them follow the same general storyline: you must choose between accidentally killing 5 people (e.g., hitting them with your car) or deliberately taking an action (e.g., swerving out of the way) that kills one person. This is obviously a situation no one wants to find themselves in, and it is so unlikely that most people avoid it their entire lives. But in the slim cases where it does occur, the split-second decision a human makes will vary from person to person and from situation to situation.


But no matter the outcome of the tragic event, if it does happen, the end result will generally be the fault of a distracted driver. What will happen, though, when this decision is completely in the hands of an algorithm, as it will be when autonomous cars ubiquitously roam the streets years from now? Every day, autonomous cars become more a thing of the present than of the future, and that leaves many worried. Driving has been ingrained in us for a century, and for many, giving that control up to a computer will be frightening. This is despite the fact that in the years that autonomous cars have been on the roads, their safety record has been excellent, with only 14 accidents and no serious injuries. While 14 may seem like a lot, keep in mind that each and every incident was actually the result of human error by another car, many of them the result of distracted driving.

I’d say that people are more worried about situations like the Trolley Problem than about the safety of the car itself when riding in an autonomous car. Autonomous cars are just motorized vehicles driven by algorithms – precise sets of instructions written to make decisions. When an algorithm written to make a car change lanes and parallel park has to make almost ethically impossible decisions, choosing between letting 5 people die or purposely killing 1 person, we can’t really predict what it would do. That’s why autonomous car makers can’t just let this problem go; they have to delve into the realm of philosophy and build an ethics setting into their algorithms.


A Google Car, the vehicle that very well may be roaming the streets in the coming years.

This won’t be an easy task, and it will require everyone, from the car makers to the customers, to think about what split-second decision they would make, so the cars can then be programmed to do the same. This ethics setting would have to work in all situations; for instance, what would it do if, instead of 5 people versus one person, it was a small child versus hitting an oncoming car? One suggested solution would be an adjustable ethics setting, where the customer gets to choose whether they would put their own life over a child’s, or kill one person rather than let 5 people die, etc. This would redirect the blame back to the consumer, giving him or her control over such ethical choices. Still, that kind of decision, which very well could determine the fate of you and some random strangers, is one that nobody wants to make. I certainly couldn’t get out of bed and drive to work knowing that a decision I made could kill someone, and I’d bet I’m not alone on that one. In fact, people may even avoid purchasing an autonomous car with an adjustable ethics setting just because they don’t want to make that decision or live with the consequences.

So what do we do? Nobody seems to want to make these kinds of decisions, even though it is absolutely necessary. Jean-Francois Bonnefon, at the Toulouse School of Economics in France, and his colleagues conducted a study that may help us come up with an acceptable ethics setting. Bonnefon’s logic was that people will be most comfortable driving a car whose ethics setting is close to what they believe is a good setting, so he tried to gauge public opinion. By asking several hundred workers on Amazon’s Mechanical Turk crowdsourcing platform a series of questions regarding the Trolley Problem and autonomous cars, he came up with a general public opinion on the dilemma: minimize losses. In all circumstances, choose the option in which the fewest people are injured or killed – a sort of utilitarian autonomous car, as Bonnefon describes it. But, with continued questioning, Bonnefon came to this conclusion:
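Purely as an illustration, the “minimize losses” rule the participants favored can be sketched as a trivial decision function. The maneuver names and casualty counts below are hypothetical, not taken from Bonnefon’s study:

```python
def utilitarian_choice(options):
    """Pick the maneuver expected to harm the fewest people.

    options: dict mapping a maneuver name to the expected number
    of people harmed if the car takes that maneuver.
    """
    return min(options, key=options.get)

# Hypothetical Trolley Problem scenario from the article's opening:
scenario = {"stay_course": 5, "swerve": 1}
print(utilitarian_choice(scenario))  # -> swerve
```

The hard part, of course, isn’t this one-liner; it’s estimating those casualty numbers in milliseconds, and deciding whether “fewest harmed” is even the right objective in the first place.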

“[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves.”  

Essentially, people would like other people to drive these utilitarian cars, but are less enthusiastic about driving one themselves. Logically, this is a sensible conclusion. We all know that we should make the right decision and sacrifice our own life over that of someone younger, like a child, or a group of 3 or 4 people, but when it comes down to it, only the bravest among us are willing to do so. While these scenarios are few and far between, the decisions made by the algorithm in that sliver of a second could be the difference between the death of an unlucky passenger and that of an even more unlucky passerby. This “ethics setting” dilemma is a problem that can’t just be delegated to the engineers at Tesla or Google or BMW; it has to be one that we all think about, and make a collective decision on, one that will hopefully make the future of transportation a little more morally bearable.

iPad Pro – What It Is And Who It’s For


Earlier this month at Apple’s annual product event, a new device was released that caught some by surprise. Historically, Apple isn’t a company known for experimenting with different product lines, sizes, colors, or software designs. In recent years, though, Apple has started to branch out from its traditional iPhone and iPad lines, bringing us the iPhone 5C, the iPhone Plus line, the iPad Mini, the Apple TV, and more. Clearly, they are trying to give their customers more options to choose from when buying one of their phones or tablets, which if anything benefits the customer more than Apple itself. But at the September 9th conference, Apple announced a product that baffled techies and average consumers alike: the iPad Pro.

Like the positioning of the MacBook Pro and the Mac Pro, the iPad Pro is essentially just a higher-performance iPad. The specs for the device are promising; most importantly, the resolution is better than that of the high-end MacBooks, at 264 pixels per inch, even beating out its newfound competitor, the Surface Pro. It has a 10-hour battery life, which is fairly good for a device of its size, and again beats out the Surface’s 9 hours. But of course, the one spec that surprised everyone was the size: the iPad Pro has an insane 12.9-inch screen.

That’s 3.2 inches bigger than the recently released iPad Air 2, the latest installment in the iPad line. Now, while the product had been rumored for months, few could have expected a super-sized iPad a year ago, in part because, unlike with the upsized iPhones, few really saw the need for a giant iPad. The iPad Air 2 is already a pretty good size at 9.7 inches, and adding three inches to that doesn’t really justify the $300 price increase. Sure, as Apple has partly marketed the device, the iPad Pro would be great for consuming media: watching movies, reading articles, and perhaps even playing games. The iPad Pro could easily replace your laptop as your main entertainment device, although personally, I would just spend the extra $200 to get the MacBook Air of the same size, as certain functionalities of the Macs over iPads are important to me and my work.

That’s not to say that the iPad Pro is a bad addition to the iPad line. If anything, it helps make the line a better fit for consumers or professionals with specific use cases. For instance, one example that everyone came up with almost simultaneously after the iPad Pro’s release was artists. The iPad Pro is a great size for a digital art pad, and the excellent display only adds to the use case. This hypothesis, that Apple was targeting artists, was only reinforced by the release of a new, almost parody-esque product: the Apple Pencil.

The Apple Pencil is, as you probably guessed, a stylus. Designed to work with the iPad Pro, the Pencil is Apple’s attempt at getting into the stylus market, although it may only work with the iPad Pro. The stylus is actually a very good one: it has a fast response time in use, is pressure sensitive, and overall has a fluid and smooth feel to it. That’s all well and good, and will definitely help artists using the iPad Pro, but the reason this device is so surprising is Steve Jobs’ views on the product category. Although Jobs isn’t around to keep Apple going anymore, he did have opinions that surely shaped the way Apple progressed each year, and this is the first time we’ve seen evidence of Apple disregarding what he thought. Jobs had a strong opinion against styluses, saying he thought they were cumbersome and hard to keep track of. “Who wants a stylus?” he said in a 2010 keynote speech. “If you see a stylus, they blew it.”

The Apple Pencil aside, the iPad Pro is an interesting product. No doubt it’s a high-quality device; it has great specs, and the big screen makes it a great content-viewing platform. But for $800? I certainly wouldn’t spend the money, but for people who can (or just want to), it’s a great purchase, as long as they know why they want it. If you’re just looking for an iPad, the iPad Air 2 is a great choice. The iPad Pro is the kind of niche device that’s great for people who have a reason to use it, such as artists, but maybe not as profitable for the company, and certainly not the type of device you expect Apple to release. Still, it may show they are trying to branch out into more product types and categories, which very well may lead to some great products in the future.

Why Other Companies Should Follow Alphabet’s Lead


For as long as it has existed, the public has identified the Google brand with the ridiculously popular search engine of the same name. Since 1998, Google search has grown exponentially while staying pretty much the same, yet the company Google has expanded aggressively into fields well beyond search, both through acquisitions such as YouTube and Nest, and organically via the company’s extensive R&D initiatives such as Google Glass and the company’s autonomous car. Still, this has all fallen under the good old multi-billion-dollar umbrella of Google. That all changed last week. Not that the initiatives inside Google have fundamentally changed; the profit driver continues to be search and advertising, with many long-term bets still hoping to flower eventually. Nevertheless, the restructuring of management lines arguably has dramatic long-term implications, in my opinion for the better.


Alphabet’s new, clean logo.

In a nutshell, Google created a holding company called Alphabet that owns Google and many of the smaller companies Google has acquired or created, such as Nest and Calico (Google’s longevity initiative). Alphabet is now a portfolio of enterprises managed by founders Larry Page and Sergey Brin, many with distinct CEOs who have a fair amount of independence. The shift came as a shock to everyone outside Google (and likely many inside the company), sounding more like one of the company’s classic April Fools’ jokes than a typical corporate maneuver. Renaming a company with one of the top business brands in the world? Insane in many respects, not to mention handing the CEO role of “Google Classic” to another executive, Sundar Pichai.

So why is this a good idea, and why should other companies consider following Google’s lead? It really comes down to what they are trying to accomplish as a company. In the announcement letter that you can read at abc.xyz, they wrote the following, which helps explain their reasoning behind the change:

 “As Sergey and I wrote in the original founders letter 11 years ago, ‘Google is not a conventional company. We do not intend to become one.’ As part of that, we also said that you could expect us to make ‘smaller bets in areas that might seem very speculative or even strange when compared to our current businesses.’ From the start, we’ve always strived to do more, and to do important and meaningful things with the resources we have.”

Alphabet’s initiatives are far-flung and have far more potential than Google’s traditional “cash cow” businesses to change the world. It’s hard not to see some of Alphabet’s initiatives becoming wildly successful and ultimately spinning out into independent, and large, public companies. For instance, I previously mentioned Calico as one of the companies Alphabet is keeping under its wing. Calico is a scientific research and technology company whose ambitious goal is to research and eventually create ways to extend life and help people live healthier. Ambitious as that goal is, reaching it with the help of Alphabet could very well change the world in a major way.


What really excites me about Alphabet is that they’re doing precisely what I would do with all that money and those resources: create and finance projects that will change the future. Google the search engine has become a very conventional business in the Internet age, but Google the company aspires to much more than just rolling in the cash and iterating on its existing product lines (I hate to say it, but I’m looking at you, Apple). In an age of tech titans, companies such as Google, Facebook, Amazon, Apple, and Microsoft are all angling to stake their claim to the future. And under Alphabet, Google aims to establish a leading innovation platform by “letting many flowers bloom.” Rather than sticking to a couple of odd ventures and mainly staying a conventional company, Alphabet lets Google expand into a business set on creating a better future. And if this change in corporate structure helps facilitate that, then I say go for it.

If you want to read the original Alphabet announcement letter, click HERE.

Augmented Vs. Virtual Part 2 – Augmented Reality


Reality is deeply personal: it is how we perceive the world around us, and it shapes our existence. And while individual experiences vary widely, for as long as humans have existed, the nature of our realities has been broadly relatable from person to person. My reality is, for the most part, at least explainable in terms of your reality. Yet as technology grows more capable and more widespread, we are coming closer to an era where my reality, at least for a period of time, may be completely inexplicable in terms of yours. There are two main ways to do this: virtual reality and augmented reality. In virtual reality, technology immerses you in a different, separate world. My earlier article on VR was the first of this two-part series, and can be found HERE.

Whereas virtual reality aims to totally replace our reality with a new vision, augmented reality does what the name suggests: it augments, changes, or adds on to our current, natural reality. This can be done in a wide variety of ways, the most popular currently being a close-to-eye translucent screen with graphics projected on top of what you are seeing. This screen can take up your whole field of view, or just a corner of your vision. Usually, the graphics or words displayed on the screen are not completely opaque, since they would then block your view of your real surroundings. Augmented reality is intrinsically designed to work in tandem with your current reality, while VR dispenses with it in favor of a new one.


An example of a consumer use case for tablet-based AR.

With this more conservative approach, augmented reality (AR) likely has greater near-term potential. For VR, creating a new world to inhabit limits many of your possibilities to the realm of entertainment and education. AR, however, has a practically unlimited range of use cases, from gaming to IT to cooking to, well, pretty much any activity. Augmented reality is not limited to, but for now works best as, a portable heads-up display – a display that shows helpful situational information. For instance, at Epson’s booth at Augmented World Expo 2015 there was a demo of a driving assistance app for AR. In my opinion, the hardware held back the software in that case, as the small field of view was distracting and the glasses were bulky, but you could tell the idea has potential. Industrial use cases were also prominently displayed at AWE alongside consumer ones, including instructional IT assistance such as remotely assisted repair (e.g., in a power plant, using remote visuals and audio to help fix a broken part).

Before I go on, I have to mention one product: Google Glass. No AR article is complete without mentioning the Google product, the first AR device to make a splash in the popular media. Yet not long after Google Glass was released, it started fading out of the public eye. Obvious reasons included the high price, the very odd look, and the social novelty: people couldn’t think of ways they would use it. Plus, with the many legal and trust issues that went along with using the device, it often just didn’t seem worth the trouble. Rumor has it that Google is working on a new, upgraded version of the device, and it may make a comeback, but in my opinion it’s too socially intrusive and too new to gain significant near-term traction.

Although many new AR headsets are in the works (most importantly Microsoft’s HoloLens), the development pace is lagging behind VR, which is already at the stage where developers are focused on enhancing current design models, as I discussed in the previous VR article. For AR, the situation is different. Hardware developers still have to figure out how to create a cheap AR headset that also has a full field of view, is relatively small, doesn’t obstruct your view when not in use, and so on. In other words, the hardware of AR still occasionally interrupts the consumption of AR content, while VR hardware is well on its way to overcoming that particular obstacle.

Beyond these near-term obstacles, if we want to get really speculative, there could come a time when VR surpasses AR even in pure utility. This could occur when we are able to create a whole world, or many worlds, to be experienced in VR, and we decide that we like these worlds better. When the immersion becomes advanced enough to pass for reality, that’s when we will abandon AR, or at least use it less and less over time. Science fiction has pondered this idea, and from what I’ve read, most stories go along the lines of people spending most of their time in the virtual world and sidelining reality. The possibilities are endless in a world made completely from the fabric of our imagination, whereas our current reality places a lot of restrictions on what we can do and achieve. Most likely this is a long, long way off, so we have nothing to worry about for now.

Altogether, augmented reality and virtual reality are both innovative and exciting technologies with tremendous potential to be useful. On one hand, AR will most likely be used more than VR in the coming years for practical purposes, since it’s grounded in reality. On the other hand, VR will mostly be used for entertainment, until we hit a situation like the one I mentioned above. It’s hard to pit these two technologies against each other, since they both have their pros and cons, and it really just depends on which tech sounds most exciting to you. Nonetheless, both AR and VR are worth the attention and hype, as they will surely change our world forever, for better or worse.

Augmented vs. Virtual Part 1 – Virtual Reality


Technologically enhanced vision has been with us for many hundreds of years, with eyeglasses having been in use since at least the 14th century. Without effective sight, living has of course remained possible, but it is a meaningful disadvantage. Now, new technologies promise not only to make our lives easier, but also to give us capabilities we never thought possible.

This idea, enhancing our vision using technology, encompasses a range of technologies, including the two promising arenas of augmented reality (AR) and virtual reality (VR). The names are fairly self-explanatory: augmented reality supplements and enhances your visual reality, while virtual reality creates a whole new reality that you can explore independently of the physical world. Technically, AR hardware generally consists of a pair of glasses, or see-through panes of glass attached to hardware, running software that projects translucent content onto the glass in front of you. VR, on the other hand, is almost always a shoebox/goggle-like headset, with two lenses blending two separate screens in front of your eyes into one image, using head motion-tracking to make you feel like you are in the virtual world. Both are very cool to experience, as I found while attending the Augmented World Expo last week in Silicon Valley, where I was able to demo a host of AR and VR products. This article focuses on my experiences with virtual reality gear; next week I will follow up with thoughts on augmented reality.

Virtual Reality

Virtual reality, when combined with well-calibrated head-tracking technology, allows you to be transported into a whole new world. You can turn your head, look around, and the software responds as if this world is actually around you, mimicking real life. This world can be interactive, or it can be a sit-back-and-relax type experience. Both are equally astounding to experience, as the technology is advanced enough so that you can temporarily leave this world and enter whatever world is being shown on your head mounted display (HMD). I wrote about a great use-case of VR at the AWE Expo recently, which involved being suspended horizontally and strapped into a flight-simulation VR game.

Despite what you might think, the optics no longer seem to be a problem; the engineers at early leaders including Oculus and Samsung (with the Gear VR) have designed headsets that don’t bother our eyes during use, a problem that plagued early models. That said, complaints persist about vertigo and eye strain from long periods of use. Even Brendan Iribe, Oculus’s CEO, got motion sickness from their first dev kit. Luckily, his company and others have been making improvements to the software. Personally, I didn’t get sick in the least while at the conference.

11430095_1635116580036623_7422015937432952630_n

An attendee of the AWE 2015 Expo demoing Mindride’s Airflow.

Uses for VR, among many, tend to fall into one major category thus far: entertainment. Video games are set to be transformed by virtual reality, which promises to bring a new dimension to what is possible in a gaming experience. First-person shooters and games of that ilk were already trying to become as real and immersive as possible on a flat screen, but with a 360-degree view around the player and interactive head-tracking… well, it’s surprising that games like Halo, Destiny, and Call of Duty don’t already have VR adaptations. Games with more artistic themes and play will also benefit greatly from VR over 2D screens, as the ability to look around and feel like you are in the game will surely spark ideas in many developers’ heads. At E3 2015, which took place this week in Los Angeles, many commented that virtual reality was an obvious trend in gaming this year, and excitement was starting to build about VR’s potential. While hardly exclusive to gaming, VR also appears to be a promising tool for immersive military training. Nothing prepares a soldier or a pilot for a battlefield or in-air situation better than already having pseudo-experienced it. The possibilities for gaming and military training are endless, and it really is exciting to see what developers are coming up with.


A VR headset being used for military training.

One thing that may hold VR back is the hardware. Despite having mitigated the vertigo issues, another hardware complaint has been weight. The Oculus Dev Kit 2 weighs a little less than a pound, which isn’t much but can be strenuous to wear for a long period of time. Still, if we have learned anything from the growth of smartphones, it’s that technology marches in one clear direction: smaller, lighter, and faster. And that’s one thing that I believe separates AR and VR: VR is already at the point where the only changes needed are upgrades to the existing hardware – the pixel density, the graphics speed, the weight, the size. In a few years, many of the major problems with VR will be solved.

Whereas all VR has to do is get the hardware right and then integrate head-tracking software into its 3D games or movies, AR has a ways to go until it has perfected its hardware to the same level. AR is frankly just harder for developers. Not only do they have to worry about pixel density, head-tracking, weight, and size like VR, but they also have to worry about depth, screen transparency, object recognition, 3D mapping, and much more. Currently, there isn’t one big AR player, like Oculus, that small developer teams can use as a platform for their own AR software, and that might also be limiting the growth of the technology. A big player may emerge in the next couple of years, with candidates including Google Glass and Microsoft’s upcoming AR headset HoloLens leading the race, but for now, AR isn’t really an area where small development teams can just jump in.


In the grand scheme of things, AR and VR are at similar stages of development. Within a decade or two, these problems will vanish, and the technologies will stand face-to-face, the only thing separating them being their inherent utility in particular situations. VR is a technology made for entertainment and gaming. The idea of transporting yourself to another world, especially when the tech is fully developed and you can’t tell the difference between VR and real life, is as exciting as it is terrifying. Still, we can’t help but try to create these amazing games and experiences, as they very well may expand humanity into virtual worlds we never could have dreamed of. As developers start experimenting with the technology and consumers start buying units, VR will grow into many more markets, but for now, entertainment, gaming, and military training are the main uses. It really is a technology out of the future, and I can’t wait to see what amazing experiences and tools VR will bring to the world next.

This is the first piece in a two-part series on AR vs. VR. Check back here soon for the second article!

Mindride’s Airflow Can Make You Fly – Well, Virtually


Humans can’t fly without technological assistance, but that hasn’t stopped us from building planes, helicopters, wingsuits, and more. Flying shows up in mediums ranging from comic books to myths and fairy tales to cultural folklore. From Icarus to Superman, humans have desired to fly. But as technology has advanced, watching people fly hasn’t satisfied us; now we want to feel like we truly are flying, and in this respect technology is beginning to grant our wish, through Virtual Reality devices.

This morning, at the Augmented World Expo in Santa Clara, California, I got the opportunity to fly. In a unique booth at the Expo, a company called Mindride offered an experience, Airflow, that involved strapping myself into a harness, donning headphones and an Oculus Rift, and then flying Superman-style through a virtual Alps-like landscape. How could I say no?  And so, after 5 minutes of harnessing and calibration, I was flung into this mountainous world, floating thousands of feet above the “ground.” Under me were mountains, some snow-capped, others green. Around me, randomly scattered in the sky, were big pink spheres. The objective of this experience was to steer yourself towards these spheres, trying not to flinch as you run right into them, and pop as many as possible. I have to say, I think I did pretty well, but the larger point is that current generation VR technology is enabling experiences that really can begin to replicate those that humans have dreamt of for centuries. 

The booth was set up pretty unusually. With a desk off to the side, the majority of the space was taken up by the “ride” itself. Consisting of a couple of beams with straps, harnesses, and cords running everywhere, the infrastructure was pretty impressive but not exactly family-room-ready. Before you get to fly, sensors are strapped to each arm to track where you are pointing in relation to your body. Once strapped in, I was hanging horizontally, with the computers gauging whether I was holding my arms straight back in boost mode, left arm out to go left, right arm out to go right, or both arms dangling to hover in place. On my head was an Oculus Rift running Airflow’s custom software. To add effect, two fans blow air in your face, varying their output based on your flight speed.
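The arm-pose control scheme described above is simple enough to sketch in a few lines of code. This is purely my illustration of the idea, not Mindride’s actual software; the sensor readings and thresholds are hypothetical:

```python
# Hypothetical sketch of Airflow-style arm-pose steering (not Mindride's real code).
# Assumes each arm sensor reports how far that arm is extended to the side (0..1),
# plus a flag for whether both arms are swept straight back.

def flight_command(left_ext: float, right_ext: float, arms_back: bool) -> str:
    """Map two arm-extension readings to a flight command."""
    if arms_back:
        return "boost"   # both arms swept back: accelerate forward
    if left_ext < 0.2 and right_ext < 0.2:
        return "hover"   # both arms dangling: hold position
    # Otherwise bank toward whichever arm is extended further.
    return "left" if left_ext > right_ext else "right"

# Example: left arm fully out, right arm in -> bank left
print(flight_command(0.9, 0.1, arms_back=False))  # left
```

In the real rig the output would of course be a continuous steering angle rather than a discrete command, but the mapping from body pose to flight control is the core idea.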


Overall, the experience was surreal. Once you are strapped in and flying, wind in your face, you easily forget your immediate surroundings, which in my case included a gaggle of tech entrepreneurs demoing their products. The immersion was astounding, and Mindride did a great job making the experience more than a run-of-the-mill VR game. Of course, as with any new technology, there are clear hints that you aren’t truly flying across a mountain-filled world chasing pink bubbles. The occasional background noise interfered with the experience, as did my tendency to shift focus from the screen-wide image to pixel-level details. But again, as the technology advances, these subtle distractions will be minimized; in fact, solutions to some of the issues I had were even on display at the Expo. As experiences like these gradually become more common in places like malls, theme parks, and even our own homes, we will start to see a blending of reality, as we’ve always known it, and virtual reality – a reality in which anything is possible. It’s hard to doubt the demand for that.

The James Webb Space Telescope – An Astronomer’s Dream


Astronomy is all about looking up at the stars: trying to figure out how the universe works and where we as humans on Earth fit into that giant universe. Where geneticists and particle physicists work on the smallest scales, astronomers work on the largest physical scales: the firmament. For a long time, the naked eye and then simple telescopes were enough to make productive observations, but science has reached a point where we need ever-better equipment to realize new discoveries. The bigger, more expensive, and more technically advanced the telescope, the better. And sending it into space is better still, to render the sharpest images and readings.

That seems like a big ask, and it is. The Hubble Space Telescope made its way into popular culture as the first scientific telescope the public actually knew and cared about. Well, in 2018, a new telescope is scheduled to launch that is even greater than the legendary Hubble. It’s called the James Webb Space Telescope (JWST), and it’s pretty much an astronomer’s dream.

“Why an astronomer’s dream?”, you may be asking. The answer is fairly straightforward: the JWST is a gigantic, high-tech, multi-purpose instrument. To put it in perspective, the Hubble telescope has a primary mirror – essential for capturing astronomical images – about 8 feet across. That 8-foot mirror produced images like this:


Now consider the JWST. It is planned to have a mirror a whopping 21 feet 4 inches in diameter, made up of 18 smaller hexagonal segments. The incredible size of the James Webb Space Telescope is only one of the many things about it that have astronomy nerds all over the world very excited. Being a multi-purpose telescope, the JWST has much to offer scientists. Below I describe only a handful of JWST’s most prominent features, abilities, and facts:
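Why does the mirror’s diameter matter so much? A telescope’s light-gathering power scales with mirror area – the square of the diameter. A quick back-of-the-envelope check, using the metric diameters (6.5 m for JWST, 2.4 m for Hubble):

```python
import math

# Light-gathering power scales with primary-mirror area (~ diameter squared).
jwst_d = 6.5    # JWST primary mirror diameter, meters (21 ft 4 in)
hubble_d = 2.4  # Hubble primary mirror diameter, meters (~8 ft)

def mirror_area(diameter_m: float) -> float:
    return math.pi * (diameter_m / 2) ** 2

ratio = mirror_area(jwst_d) / mirror_area(hubble_d)
print(f"JWST collects ~{ratio:.1f}x more light than Hubble")  # ~7.3x
```

Roughly seven times Hubble’s collecting area means far fainter, and therefore far more distant, objects come into view.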

Infrared Radiation Detection

The James Webb Space Telescope detects infrared wavelengths of light rather than the visible spectrum. If you’re not an astronomy nerd, you may wonder why this difference is significant. Infrared is close enough to the visible range that telescopes can use it to create an image our eyes can understand, but far enough outside our visible range to have some key unique qualities. For instance, unlike visible light, infrared light isn’t much impeded by interstellar dust and gas. This means that the JWST will have largely unobstructed views of what were previously clouded stellar nurseries – the regions where stars form. Hubble couldn’t peer effectively into these nurseries because of their surrounding gas and dust, but the JWST can. This will give astronomers a look into the formation of stars, which is still shrouded in mystery.

Not only that, but infrared radiation emanates from cooler objects: you have to be roughly as hot as fire to give off significant amounts of visible light, and the Earth obviously is not, but everything from a tree to you emits infrared light, which is precisely how night-vision goggles work. More importantly, planets glow in the infrared, while stars are so hot that most of their output is visible (and shorter-wavelength) light. That means that, for the first time, we may be able to take photos of exoplanets themselves. Before, with Hubble, stars far outshone even the biggest of planets. Since the star-planet contrast is far smaller in the infrared, we will be able to focus on the planets themselves, and may even get detailed images of planets outside our own solar system. Pretty exciting, even for non-astronomy nerds.
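Wien’s displacement law makes this concrete: the wavelength at which a body’s thermal glow peaks is inversely proportional to its temperature, so hot stars peak in visible light while temperate planets peak deep in the infrared. A quick sketch, using round illustrative temperatures:

```python
# Wien's displacement law: lambda_peak = b / T
B_WIEN = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_m(temp_k: float) -> float:
    """Wavelength (meters) where a blackbody at temp_k radiates most strongly."""
    return B_WIEN / temp_k

sun_peak = peak_wavelength_m(5800)    # Sun-like star, ~5800 K
planet_peak = peak_wavelength_m(300)  # temperate planet, ~300 K

print(f"star peaks near   {sun_peak * 1e9:.0f} nm (visible light)")
print(f"planet peaks near {planet_peak * 1e6:.1f} um (mid-infrared)")
```

The star’s output peaks around 500 nanometers, squarely in the visible band, while the planet’s peaks near 10 microns – exactly the territory an infrared telescope like JWST is built for.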

Lagrange Point

So, where will this telescope be orbiting? Technically, it’s orbiting the sun: the JWST will reside at a Lagrange point, a very cool astrophysical spot where (and this is an oversimplification) the gravity of the sun and the Earth combine in just the right way that an object parked there keeps pace with the Earth as both circle the sun, effectively hovering in a fixed position relative to us. This gives the telescope a steady, unobstructed view of the stars. There are three such Lagrange points on the sun-Earth axis: L1, directly between the sun and Earth; L2, on the far side of the Earth away from the sun; and L3, on the opposite side of the sun entirely. JWST will reside at L2.
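For the curious, the approximate Earth-L2 distance can be estimated from the standard Hill-sphere approximation, r ≈ R·(m / 3M)^(1/3), where R is the Earth-sun distance, m the Earth’s mass, and M the sun’s mass. A rough check (a sketch of the approximation, not an exact orbital solution):

```python
# Approximate Earth-L2 distance via r ~ R * (m / (3 M)) ** (1/3)
R_EARTH_SUN_KM = 1.496e8  # mean Earth-sun distance, km
M_EARTH = 5.972e24        # kg
M_SUN = 1.989e30          # kg

r_l2_km = R_EARTH_SUN_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"Earth-L2 distance: ~{r_l2_km / 1e6:.1f} million km")  # ~1.5 million km
```

That works out to roughly 1.5 million kilometers – about a million miles – which is why servicing missions are off the table.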

This, of course, has upsides and downsides. First of all, being at the Lagrange point means the telescope will be about a million miles (1.5 million km) from Earth, i.e., we will have no way of fixing it if anything goes wrong. And, as Hank Green reminds us in the video above, we have had to fix the Hubble a number of times, and that just won’t be possible with the JWST. Basically, we had better get it right the first time. Also, the JWST is so massive that it can’t fit into a rocket fully assembled, so NASA engineers have had to design a complex unfolding system that could go wrong at any moment.

 

It Can See 13.4 Billion Years Into The Past

Yup, you read that right. Hubble could look far into the past, but not nearly as far as the JWST. Given the time light takes to reach our Earthbound eyes, we’re always seeing the universe as it existed in the past. As a result, the farther away you point your telescope, the closer to the Big Bang you are able to see. Whereas the Hubble Ultra-Deep Field could look 7-10 billion years into the past, the James Webb Space Telescope, with its much larger mirror, can peer fully 13.4 billion years into the past, almost reaching the moment of “first light”: the time after the Big Bang when the universe had cooled enough for the very first stars and galaxies to form and begin to radiate light. With the JWST, we are seeing nearly all the way back in time to the beginning of the universe. There’s no doubt that this will allow astronomers and cosmologists to answer many previously unanswerable questions about how the universe formed. If this doesn’t make you excited for the launch of the telescope in 2018, nothing will.
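To put that lookback time in context, subtract it from the commonly quoted age of the universe (about 13.8 billion years):

```python
AGE_OF_UNIVERSE_GYR = 13.8  # commonly quoted estimate, billions of years
LOOKBACK_GYR = 13.4         # how far back JWST is designed to see

# Light from the most distant objects JWST can see left them
# this long after the Big Bang:
after_big_bang_myr = (AGE_OF_UNIVERSE_GYR - LOOKBACK_GYR) * 1000
print(f"~{after_big_bang_myr:.0f} million years after the Big Bang")  # ~400
```

In other words, JWST will watch galaxies as they were only a few hundred million years into cosmic history.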


A full-scale model of the JWST made in Dublin

Now that you’ve heard all that, can you possibly not be counting down the days to the launch three years from now? To recap: the JWST can take pictures of planets outside our solar system, watch stars being born, and see the first galaxies in the universe forming. It sounds like something out of a science fiction book, but it’s not. NASA expects to spend $8.7 billion on this telescope, which is a lot, but in my opinion, the investment is far better than the even larger sum Facebook recently paid for a popular instant messaging app. The James Webb Space Telescope is truly an astronomer’s dream, and I can’t wait to see what discoveries are made because of it.

The Void Brings Virtual Reality To Life


No doubt, virtual reality is a revolutionary technology for gaming, military training, and more. Vision makes up so much of our reality that when it is altered or augmented, we can feel like we are in a totally different world, even if things are occurring in that world that we know can’t really happen (e.g., aliens attacking, cars flying, or dragons breathing fire). This is the power and potential of virtual reality products currently under development, such as Project Morpheus and the Oculus Rift: they can make you believe that you are living in a fantasy world.

The new post-beta Oculus headsets appear to represent a big step forward, but they still lack one major ingredient of fully immersive gaming: the actual feeling of running away from an enemy, picking up objects, and jumping over a pit of lava. Although the experience is pretty good with vision alone, the last step toward something I would consider fully immersive is a gaming system that makes you physically feel like you are in the game. And that is exactly what Ken Bretschneider, founder of The Void, is trying to achieve.

“I wanted to jump out of my chair and go run around,” Bretschneider said. “I wanted to be in there, but I felt like I was separated from that world just sitting down playing a game. So I often would stand up and then I couldn’t do anything.”

Although still conceptual, The Void’s product aims to take virtual reality gaming to the next level. The idea behind the company is pretty simple: it will create “Void Entertainment Centers” that pair high-tech virtual reality gear with real, physical environments to create the ultimate VR experience. Sounds awesome.

The execution of this idea, however, is very complicated.

There are many technical obstacles to creating a fully immersive VR experience. First of all, you need state-of-the-art tracking systems, not only to make the experience more realistic, but also because you don’t want players running into walls (or each other) because the VR headset lagged slightly or didn’t depict an object in the first place.

Also, the VR headset itself had better be up to par, otherwise the whole experience isn’t worth it. According to The Void’s website, their “Rapture HMD” (head-mounted display) is as good as, if not better than, other VR headsets such as the recently announced Oculus Rift, set for release later this year. With a screen resolution of 1080p per eye, head-tracking sensors accurate to sub-millimeter precision, a mic for in-game communication, and high-quality built-in THX headphones, the Rapture HMD isn’t lacking in impressive specs. Whether it is ultimately good enough to feign reality, though, is a question that will only be answered when the headsets go into production and become part of The Void’s immersive experience.

The Void gear includes not only the HMD but also a set of special tracking gloves – to make your in-game hands as real as possible – and a high-tech vest to provide haptic feedback in response to virtual stimuli. But the technology alone does not suffice, as that is replicable outside of The Void. What makes the experience unique is the physical environment around the players, built into The Void’s Entertainment Centers. In each game center, the first of which is planned for Pleasant Grove, Utah, there will be an array of different stages prepared for players to experience a variety of virtual games. Every “Game Pod” has features that make playing there more immersive, such as objects you can pick up and use during the game, elevation changes in the platforms, and even technologies that simulate temperature changes, moisture, air pressure, vibrations, smells, and more. All of these physical stimuli outside the game are designed to trick your brain into thinking it’s in the game, and that’s pretty much exactly the experience The Void is trying to provide.


Overall, The Void is a big step towards a new age in gaming. For as long as gaming has been around, the actual stimuli coming from the game have been purely vision- and hearing-based; now, by incorporating real objects, physical surroundings, and the environment-based technologies mentioned above, we are nearing a completely immersive experience (a la Star Trek’s holodeck). Science fiction writers have long pondered virtual systems that realistically simulate other worlds, and The Void is potentially one step closer to that ideal. Whether or not we are heading rapidly in that sci-fi direction, for now The Void’s Entertainment Centers would certainly be a lot of fun.

 

SparcIt Aims to Quantify Your Creativity


Note: The team at SparcIt has been kind enough to create for FFtech a special link to a demo of their product. Please try it out for yourself here: http://demo.mysparcit.com/g/FastForwardTech

Creativity is something that, for as long as humans have had a name for it, we have struggled to define. Creativity is just something within everyone to varying degrees, and some are better at displaying and exercising it than others. While I’m sure most people agree that creativity is a key ingredient of success, it isn’t explicitly used as a factor for, say, getting into college or getting a job, despite its relevance. Tests such as the SAT and GRE are used to predict future academic success, even though this type of test-taking smarts may be only a small portion of what makes someone successful. (Besides, these tests are highly debatable indicators of smarts in the first place.)

Unfortunately, creativity is one of the hardest attributes to quantify, while SAT-style skills are easily calibrated and ranked; hence there aren’t any popular tests that measure it. An easy-to-grade creativity test is a landmark that hasn’t yet been achieved. This is the niche the startup SparcIt wants to fill.


SparcIt has created a series of short, fun online games that aim to accurately quantify your creative ability. You may be suspicious of exactly how SparcIt measures creativity, and that’s reasonable: creativity is viewed as an intangible, part of who you are. But even if there is no true way to quantify creativity, SparcIt is creating a way to get a pretty good idea of your relative creative abilities.

The games SparcIt has you complete each test a different creative ability. For instance, LoopIt has you come up with all the possible uses for a certain object. MapIt has you create a word tree, branching off of one starting word into as many related words as you possibly can. ImproveIt has you come up with as many improvements for a certain object, such as a trash can, as you can: how could you make it better? The full SparcIt test, which you can take a demo of HERE, can take anywhere from 10 to 25 minutes, depending on how thorough you want to be. In the end, the test gives you four ratings: fluency, flexibility, originality, and a combined score, the Creativity Quotient. Here is what each of these ratings means:


A sample SparcIt scorecard.

Flexibility

Flexibility is an interesting statistic. Basically, its goal is to rate how open you are to creating totally new ideas – how many different categories your responses fall into, as SparcIt defines it.

Flexibility is similar to fluency in that both deal with generating ideas, but while fluency is about generating as many ideas as possible, flexibility is about generating ideas that are different from each other. Thinking flexibly is a valuable skill because it allows you to get out of a “thinking rut” in order to come up with a whole new idea.

Fluency

Fluency, although touched on above, is the ability to generate a wealth of ideas. Coming up with a multitude of ideas is always helpful, as increasing the sheer volume of ideas increases your chance of coming up with a good one. Of course, these ideas have to be at least on topic, otherwise you’re just spouting useless nonsense. So, SparcIt defines Fluency as your number of unique and relevant responses – in short, how good you are at brainstorming.

Originality

To put it in technical terms, Originality according to SparcIt is the statistical infrequency of your responses: how well you can think outside the box, a very important skill if you want to create something completely novel.

Creativity Quotient

It’s difficult to do full justice to SparcIt’s system, which, as they say, is a “psychometric automated engine designed to measure creative thinking ability.” In essence, though, the SparcIt system combines these metrics into your total Creativity Quotient. The Creativity Quotient is a number out of a possible 1600 (sound familiar?), acting as a standardized-test proxy. Think of your Creativity Quotient as a general assessment of how well you can use your creative skill to generate good ideas that may be useful in your education, your career, and your life in general.
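To make the idea concrete, here is a toy sketch of how metrics like these could be combined. This is purely my illustration – not SparcIt’s actual engine – and the category labels, rarity values, and weights are all made up for the example:

```python
def creativity_scores(responses, category_of, rarity_of):
    """Toy creativity scoring - an illustration, NOT SparcIt's real algorithm.

    responses: list of idea strings from one brainstorming game
    category_of: maps an idea to a category label (e.g. "office")
    rarity_of: maps an idea to a 0..1 rarity (1 = statistically infrequent)
    """
    unique = set(responses)
    fluency = len(unique)                                # unique, relevant ideas
    flexibility = len({category_of(r) for r in unique})  # distinct categories
    originality = sum(rarity_of(r) for r in unique) / max(fluency, 1)

    # Combine and scale to a 1600-point "quotient" (arbitrary toy weights).
    raw = fluency + flexibility + 10 * originality
    quotient = min(1600, round(raw * 40))
    return fluency, flexibility, round(originality, 2), quotient

# Example: uses for a brick, with hand-assigned categories and rarities
ideas = ["paperweight", "doorstop", "paperweight", "hat"]
cats = {"paperweight": "office", "doorstop": "home", "hat": "clothing"}
rare = {"paperweight": 0.2, "doorstop": 0.3, "hat": 0.9}
print(creativity_scores(ideas, cats.get, rare.get))
```

Note how the duplicate “paperweight” is discarded before any metric is computed – repeating an idea should never raise a creativity score.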

No doubt, creativity is something we should take into account more when people apply for colleges, jobs, internships, scholarships, and more. The only practical way to compare two people’s relative creative ability is to quantify it, which is something few besides SparcIt have seriously attempted. Of course, SparcIt is still in the early stages of its development, and we can look for this technology to develop further as the product gains wider use.


A sample of some SparcIt games to test your creativity.

Again, to try SparcIt out for yourself, you can go HERE for a free demo.
