This is my course page for the CS491 Virtual and Augmented Reality course at UIC. I'll constantly update the page with articles and projects done during the semester.
During this first week of classes, I got to know many different applications of Virtual Reality and Augmented Reality, and some of them were really impressive and astonishing. This is a review with my impressions of them: pros, cons, applications, and what I think could be improved.
My experience started with something unique: CAVE2, state-of-the-art Virtual Reality technology. The possible applications of this huge platform are countless, and the best thing about it is that you just need a pair of cheap glasses and you are all set: walk in and find yourself immersed in a 3D virtual world. The fact that you can still see the real environment around you also makes it possible to interact with other people and real objects. Unfortunately, my pessimistic attitude also shows me the disadvantages of such a platform: it is neither portable nor affordable. As much as I would love to have something like this in my house to be entertained for the rest of my life, I guess it can only remain a dream, and I will just be happy to stick around it as long as I can while studying at UIC.
Since I started by talking about the thing I liked the most, I might as well continue in decreasing order, and without surprise the next device is the HTC Vive, a more affordable and portable device without a doubt. What I really liked about this device compared to the other virtual reality glasses is the resolution, the wide viewing angles, and the fact that it's really comfortable. Glasses like Google Cardboard have the very big disadvantage of using your somewhat limited phone screen, leading to bad performance, narrow viewing angles, and bad resolution on most devices. Moreover, they are not very comfortable to put on if you wear your own glasses too, and when I tried them I couldn't keep them on for more than two minutes because I was starting to feel dizzy. This is why I don't enjoy those cheap devices, and even if their accessibility to everybody is a pro, I would rather not use VR at all if these were the only options. I also think that one of the reasons Virtual Reality hasn't really taken root yet is that people don't really enjoy using it. Luckily there is the HTC Vive, whose only big disadvantage, I'm afraid, is that it could be addictive and make social life even worse than it already is nowadays. But if you are a computer science student with an already poor social life and you like to travel but have neither time nor money, Google Earth VR could be a good alternative to satisfy your dream of visiting the world.
Switching subject and talking about Augmented Reality, the best device I tried is the HoloLens by Microsoft. What I liked most about it is how stable the virtual objects are in the real world; it almost looks like they are really there. The possible applications are countless, but I mostly see them in the fields of education, learning, and training rather than entertainment. What I did not like, and think could be improved, is the small field of view in which you see the virtual objects: if you tilt your head too much, the object might partially go out of the screen, immediately revealing its fakeness. Another drawback is that it's impossible to keep your own glasses on while using it. If I had to redesign the visor, I would give it a shape similar to a helmet or a mask like the HTC Vive.
The last type of Augmented Reality device is the basic Android application, like Quiver or Spacecraft3D. These apps let you experiment with Augmented Reality in an easy and fun way. Their main advantages are that they are free and extremely portable, as well as very good at recognizing and tracking the objects, in this case simple sheets of paper, onto which they have to project the virtual elements. Even if they are not as thrilling as the other devices, they could still have entertainment or training applications, especially for kids. The only drawback of this kind of device is that you are very limited in your movements and always have to keep your phone pointing in the right direction, or the virtual object immediately disappears.
In conclusion, I think these somewhat new technology fields have great potential. What is missing is large-scale production and usage, which would lead to the development of new applications and consequently even greater adoption. Moreover, it's crucial that these devices are comfortable and don't cause unpleasant sensations, so as not to make a bad first impression; otherwise users would just stop using them straight away.
As a first test of Augmented Reality I created this small space station, which matches the color of my bed sheets so well that it becomes difficult to distinguish what is real from what is not. Hopefully, when the astronauts have finished working they will also make my bed.
The amazingly simple user interface of Unity and its perfect integration with Vuforia allowed me to create a static Augmented Reality application without writing a single line of code. I think this technology is very powerful, and with a bit of effort the possibilities are infinite.
Photo of the augmented reality scene
Fisheye shot of interaction with the virtual world
As technology advances, we are beginning to integrate more and more virtual content into our real world, and although Augmented Reality has not really caught on yet, it is very likely that in the near future most people will see through augmenting lenses.
A first stage could be a world in which people use their smartphones to look up objects' information, which could be really useful for some applications. Imagine you're at the grocery store and you start wondering what a certain food will look like when it's cooked: in this hypothetical future you would just need to scan the tag of the item, and a steamy, virtually cooked dish would appear in front of you. It would also be interesting to let users bring more than one item into the camera's scope and see the possible recipes combining the various elements, maybe also providing insights such as a health score, quantity of proteins, carbohydrates, fats, and calories, or suggesting other types of food to buy with it in a personalized way based on a dietician's recommendations and personal preferences.
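The multi-item idea boils down to summing the nutrition facts of whatever is in the camera's scope, once each scanned item is matched to a database entry. Here is a toy sketch of that step; all item names and values are invented for illustration, not real product data.

```python
# Toy sketch of the combined nutrition-facts idea; the database
# entries here are made up, standing in for a real product lookup.
NUTRITION_DB = {
    "pasta":  {"calories": 350, "protein_g": 12, "carbs_g": 70, "fat_g": 2},
    "tomato": {"calories": 20,  "protein_g": 1,  "carbs_g": 4,  "fat_g": 0},
    "cheese": {"calories": 400, "protein_g": 25, "carbs_g": 2,  "fat_g": 33},
}

def combine_items(item_names):
    """Sum the nutrition facts of every item currently in the camera scope."""
    totals = {"calories": 0, "protein_g": 0, "carbs_g": 0, "fat_g": 0}
    for name in item_names:
        for key, value in NUTRITION_DB[name].items():
            totals[key] += value
    return totals
```

A personalized suggestion layer would then only need to compare these totals against the user's dietary targets.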
This first usage of Augmented Reality could be really useful and fun; the only thing that does not really convince me is its limited usability, and the fact that we would still be seeing something on a small 2D screen that we have to take out every time our reality is too poor in information. It's not really comfortable, and most people would end up giving up the opportunity.
A deeper level of everyday Augmented Reality would be reached if we invented contact lenses or non-invasive glasses that people could wear all day. I believe that if this type of technology were available, all of humanity would start using it; the rich and comfortable usability would make it simply irresistible, and eventually everybody would live in this mixed reality.
This possibility can seem a little bit scary, though. Even if most of the applications could be useful, living in a world like this could be harmful for our brains: some applications could cause addiction, and social interaction could potentially decrease a lot, as more and more of our daily interactions would be with virtual people and objects created to be perfect and to fit our needs and desires. Another important issue would probably be privacy: wearable devices that could record what a human sees and look up people's information by scanning them is surely something no one would be okay with. A must-have on these types of devices would be the possibility to switch the augmentation on and off, so that we would not be completely immersed in the mixed world.
This was the first time I used this type of technology. My first impression of Google's Augmented Reality live translation is that it is very powerful and a very clever application of Augmented Reality. Unfortunately, the text recognition software is still very far from perfect, and this is the main reason why this kind of program is not much used yet. To see a possible use case for this type of technology, I tried to put myself in the shoes of a person who is alone in a foreign country and does not know the local language. In this scenario, an important usage of live translation would be finding the right medicine among a set of medicines with unknown names and instructions. This could matter a lot for someone who is not feeling well: they may want to take their medicine as soon as possible, and they might not be able to type all the words into a simple translator because of the pain they're feeling, or, in an extremely unlucky case, because the alphabet is unknown to them.
For this worst-case scenario I was able to find a Chinese medicine, which I would never be able to translate without an OCR program that recognizes the text. Unfortunately, as you can see in the image below, the results are not that great: when moving the camera, the translation keeps changing, and most of the output makes no sense at all. The important thing, though, is that I was actually able to understand what the medicine is for. In the upper central image the translation says: "Temporary soothing due to the following induced muscle and joint pain". Even though it wasn't able to translate the following words, at least I know the general purpose of the medicine. A fun bug to notice in the bottom central image is that the stripes of the tiger and the abstract tree image were recognized as Chinese ideograms and translated into "long" and "epilepsy".
Live translation of Chinese medicine; on the left is the real photo, on the right four different translations
From this second picture, which is the translation of an Italian medicine brochure, we can start to think about the consequences of taking a medicine without understanding what its undesired side effects could be. This could be very dangerous, which is why we have to be very careful when we think the translation is not being performed well and we are in a rush.
Live translation of Italian medicine brochure; on the left is the real photo, the translation is on the right
Another little bug to notice in the third image is that even though translation from Italian to English is selected, the app tries to translate all the other languages as well, causing confusion and many errors. I think this could be solved by detecting the language of each line and performing the translation only if the source language is the correct one.
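A minimal sketch of that fix, assuming the app exposes the OCR output line by line: detect each line's language and translate only the lines matching the selected source. The detector below is a toy word-list heuristic of my own, standing in for a real language-identification model.

```python
# Toy per-line language filter; a real app would replace looks_italian()
# with a proper language-identification model.
ITALIAN_FUNCTION_WORDS = {"il", "la", "di", "per", "con", "non", "una", "gli"}

def looks_italian(line):
    """Toy detector: does the line contain a common Italian function word?"""
    return bool(set(line.lower().split()) & ITALIAN_FUNCTION_WORDS)

def lines_to_translate(ocr_lines, detector=looks_italian):
    """Keep only lines whose detected language matches the selected source."""
    return [line for line in ocr_lines if detector(line)]
```

Lines that fail the check would simply be left untranslated on screen instead of being garbled.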
Live translation of multi-language medicine; on the left is the real photo, the translation is on the right
Talking instead about a future with AR lenses or glasses, and supposing the technology will be much better than what we are using nowadays, I think this type of application of Augmented Reality will be life-changing for many people. All the people who would like to travel to or move to another country but are scared of not understanding anything there will be able to go on their own and understand signs, product labels, instructions, and everything else. It would also be nice to have movie theaters with subtitles that every person sees in their own language thanks to automatic real-time translation.
The biggest drawback I can think of is losing the ability to learn other languages, since we won't need to learn anything other than our native one. This is something that is already happening, for instance, in the US: it's very rare to find an American who speaks anything other than English, simply because they don't really need to, since more or less everybody can communicate in English.
Lastly, as with any other Augmented Reality application, I think the user should always be given the possibility to switch such functionalities on and off. It wouldn't make sense, and it wouldn't be a good thing, to impose this kind of feature on people.
I've always been fascinated by the universe, as I believe most humans throughout history have been. It's always there, above us and around us, infinite and unexplored, and yet the only way we can actually see it or interact with it is through telescopes. I've never had access to a good telescope, and the only way I had to get closer to the universe was using a 300mm telephoto lens on my camera; with it I was able to get pretty good pictures of the Moon with all its craters. Apart from that, though, I always forget to look up at the sky, both because, living in big cities, the artificial lights make it difficult to see celestial objects, and because I'm short-sighted, which makes it even more difficult.
I find this idea of augmenting the sky with virtual objects, which are actually real objects we're not always able to see, brilliant. But, as with the other applications, I think this one too still has big limitations. I remember I already tried one of these applications some time ago and wasn't amazed by it, mostly because of the lack of technological advancement. This time I downloaded and tried the SkyView app on a Samsung Galaxy S8, and again I was not satisfied with the results. As you can see in the images below, especially the one with the Moon, in a few seconds the Moon and Mars seemed to completely change position. Moreover, I was actually able to see the real Moon, and it wasn't in either of the two locations suggested by the app, which made me wonder about the reliability of such an application.
Saturn in the SkyView application, UIC-Halsted station, Chicago
What I think is good and accurate, though, is the relative position of the various virtual celestial bodies as they are displayed on the user's screen. This could be an interesting fact to exploit: supposing the user can see the Moon, they could try to align it with the virtual one and then obtain a more or less good picture of the other stars' positions, or just try to locate stars and constellations based on their position relative to the Moon. In that case, though, a simple 2D map of the sky would serve the same purpose.
Thinking about the future, such an application that allows us to locate, identify, and better visualize objects of the real world could be an interesting feature for a pair of augmenting glasses. Focusing on the sky, the possibilities aren't that many, but a first idea that comes to my mind is integrating weather forecasts by looking at the sky: we could see a storm coming towards us from a certain direction at a specific speed, approaching us at a specified time.
Another cool thing that could pop up in the sky is the location of our friends and contacts, or of events. Instead of overlaying information on our usual field of view parallel to the ground, we could display such secondary information at the sky level; by prioritizing layers in this way we won't be overloaded with too much information. Each piece of secondary information could be promoted to a higher-priority layer, and thus displayed in front of us, based on personalized choices and on distance to the location or distance in time.
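The layer idea could be prototyped with a simple scoring rule: each secondary item starts on the sky layer and gets promoted to the foreground when its score, based on distance in space and time, crosses a user-set threshold. The scoring formula and weights below are my own invention, just to make the idea concrete.

```python
# Invented scoring rule for the two-layer idea: items that are close
# (in space and time) score higher and are promoted to the foreground.
def priority(item, w_distance=1.0, w_time=1.0):
    """Score in (0, 2] with default weights: 2 means here and now."""
    return (w_distance / (1 + item["distance_km"])
            + w_time / (1 + item["hours_until"]))

def assign_layers(items, promote_threshold=0.8):
    """Tag each item with the layer it should be rendered on."""
    for item in items:
        item["layer"] = ("foreground" if priority(item) >= promote_threshold
                         else "sky")
    return items
```

The user's "personalized choices" would then map onto the weights and the promotion threshold.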
The important thing to always keep in mind when developing this type of application is giving enough freedom to users, so that they won't feel trapped in the augmented world. This could be achieved by giving them a lot of control over the system, letting them set their preferences, level of augmentation, and layers to be displayed. The first step we have to take, though, is always that of advancing technologically: we are still far from having devices able to support this type of application, both in terms of sensors and in terms of comfort. I doubt anybody would want to wear such a device if it doesn't weigh as little as normal glasses, or if they have to charge it every two hours because of the high battery consumption that an always-on display, constantly using sensors and computing real-time 3D graphics, would require. Even if we can begin to think about these applications, that future is still pretty far away, but we will make sure it arrives one day.
The first project from one of the students in my class that I chose to test and describe is that of Edward Hughes, group number 29, whose project documentation can be found here. I'm writing this article before seeing his presentation, which will be a week from now. I chose this project because, reading the documentation, Edward stated that he had to edit the standard Vuforia classes, which is something I did too, wishing there were other ways, like interfaces to implement or classes to extend, to define our own behavior, because this is normally what we should do when coding with an object-oriented language. The other reason I chose this is that I looked at Edward's portfolio and saw he already had experience with Unity and C#, which were unknown to me up to a month ago, and I thought it would be interesting to see how he implemented some things; I could learn a lot from him.
As expected, as soon as I opened his project I saw a very good structure in the directories containing his assets. It's easy to find what you're looking for, and as I imagined, there were components I didn't know how to use, like the components that give physics to objects. For instance, for the animal balloons from drink1, by adding physics behaviors like gravity and constraining them, he made them look like real balloons, whereas before they were only simple 3D models. I found this particularly cool.
Can with animal balloons and cereal box
In general, what I liked about his project is the minimalism and order in all the scenes, which is something very few projects had. The particle objects, like the fire crackling on the placemat or the water bubbling in the drink scene, were very modest and minimal; this is really peaceful to watch, and I prefer this kind of approach to scenes you cannot really enjoy because of giant particle systems. Most of the other projects made it impossible to put many objects on the placemat because their scenes were so big and full of models that even just putting two targets close to each other would cause massive overlapping and great confusion. I also really liked the complete change of theme on the placemat triggered by a target entering it.
Minimalistic style and change of theme on the placemat
What I particularly liked in the magazine is the effect of really pressing a button: when your hand is on the button, it sinks in like you pressed it. This effect makes the usage of AR buttons more realistic and is a great idea. I also liked the 3D titles and objects appearing on every page, and the custom "random" movement of the sheep on the last page, constrained to stay inside the page. If I were to give him some advice on how to improve the magazine, personally I would like to see a smoother transition when swiping between pages, for instance by adding some kind of page-swipe animation, giving it the feel of a real magazine.
From cereal box 1 I loved the little light close to the goblin, which really emits light; overall the scene is perhaps a bit too dark and difficult to see, but it looks very realistic. Moreover, from the goblin in the scene I discovered how to use the pre-built animations of objects imported from the Asset Store. I didn't know that, and I had tried to create complex animations on my own without exploiting this possibility I didn't know existed, which was really painful, so I thank Edward for that; I'll make use of this knowledge in my future projects.
The only thing I thought was somewhat missing from his project is the breakfast background. I noticed most of the projects overlooked the breakfast theme, and I don't know if I got it wrong, but given the title of the project, "Eat It", the cereals and can, and the placemat, I really thought we had to center the project around breakfast, maybe integrating it with one or more themes to populate our scenes.
The overall project is great and very well done. I also downloaded and installed the Android application provided, and it works perfectly, smooth enough and not really heavy. I appreciate the low-poly models, like the bird and dragon balloons, which make real usage of the application on a mobile device possible.
The second project I chose to test was that of Mayank K Rastogi, group number 3, whose project documentation can be found here. I was trying to find a project among the ones whose presentations I attended, but it was really difficult: some of them did not have good documentation, others had a really bad video, and I even downloaded a 4 GB project that didn't even work when I imported it into Unity. So I decided to look at the documentation of the projects I had not seen before, and I found this one. I chose this project because I found the documentation really well done, and I already knew it would work. I loved the structure of the project; the only thing I would have changed is all the sound and light icons in the scene, which make it impossible to look at and navigate in.
The main scene around the placemat is good to look at. I like the picnic kind of breakfast, the circle shape, and especially the outdoor feeling that the clouds on top create. What I think was a bit wrong in this scene are the proportions: it's way too big, it was very difficult for me to even notice the clouds because they are very high up, and it's also difficult to capture the whole scene within the webcam's field of view. I really love the fact that the cereal box and can scenes shrink, with many models around them disappearing, when they interact with the placemat; this is something found in few projects, and it gives the user the possibility to really put the products onto the placemat and still find some sense in the global scene.
All the scenes
The magazine is also cool. I like the idea that it contains recipes, and the animated page transitions are really well done; there is even a page-flipping sound played when a page is flipped. Also, the feedback of the button being pressed is present, which is something not many projects had. The only thing that could be improved here is to make the button invisible on the last page, or at least not give the feedback that the next button can be pressed there, because nothing happens in that situation.
My favorite scene is the honey-themed cereal one, mostly static except for the bee that flies around and lands on the hive; it's peaceful and looks realistic. I also really like the interaction between this scene and the milk one: all the bees come out and try to defend their hive from the external object coming towards it, as you can see in the picture below.
Interaction between scenes
I couldn't test the custom-made target's scene, but I saw it in the video, and it's great that it's a real type of cereal. I would have liked to see more food around the two cereal boxes, like there is around the custom one and the drink, since when you put them on the placemat they don't really look like cereal boxes and the whole food theme gets a bit lost.
The nutritional facts on top of each box are brilliant, as you can see them all together and compare them. I also liked how the placemat's mascot approves or disapproves of any type of box or can you put on the placemat; the sound and animation of it talking are well done.
Overall it's a very nice project, and I appreciated taking a look at it. I also downloaded and installed the APK for Android, and it works fine and smoothly, except for some high-poly models that make the scene a bit heavy; I guess on lower-end phones it could be even laggier.
I want to start by saying that I wish I had known about this kind of app when I moved to Chicago two months ago. I already knew my room would be unfurnished, so I brought a measuring tape all the way from Italy in order to measure the space and buy furniture that would fit in it. When I got here I thanked myself for bringing it, because the bedroom was really small and I knew I would need to be very precise with measurements to make everything I needed fit in the room. Unfortunately, now it's too late because I already got everything, but I'm sure an app of this kind would have come in handy for this type of task. I'm still missing a desk in my room, but I have no space left, so I tested the app by placing a very small desk in a corner. As you can see in the image below, the app works pretty well: the computer vision algorithms that triangulate the physical objects are accurate, and the computer graphics that draw the virtual object in the scene are perfect. I still don't know if the size is actually at the right scale, and I have no way to know unless I see the desk with my own eyes, measure it, or actually buy it, put it there, and take a picture, but I think it's pretty accurate.
Desk in my room
I also tried some cool stuff with the app in two different situations. In the first one, I was on the really uncomfortable CTA bus on my way to UIC and noticed a poor child who, judging by the way he was sitting, seemed really uncomfortable, so I decided to put a virtual comfy chair on the bus for him. As you can see, he didn't even notice it and kept sitting like that for the whole ride.
The second situation in which I wished I could make some magical furniture appear in the real world was when I was in class in room 317 of Burnham Hall, the only class I have where you don't even have a proper desk to put your laptop and stuff on; there is only a really small one. I started dreaming that I could have a big desk in the first row to put everything on, and an armchair to put my feet up and be very comfortable. I think I would really enjoy the lecture that way, but of course it's only a dream.
AR furniture in weird places
I think there is still much to be improved in these apps, and they will be used much more in the future when they have more features. A cool thing that could be integrated in the future is the possibility to automatically populate and furnish a room: some kind of algorithm that, by looking at the structure of the room and maybe using personal preferences or a set budget, would fill the room with everything the user needs and propose the final price. Maybe with a click you could actually buy everything, and in a few hours some robots could arrive with the furniture and start assembling it in your room. At that stage it could probably also be possible to take whatever object and scale it down to the perfect size for your space, and machines would take care of building the item, or even modify it on the fly, for instance by adding modular components to it. The best thing would be to easily create your own objects from scratch within the application and have them built. It would make our lives so easy: we could just model the real world like we would model a digital one.
Another idea is an app that can scan objects in the real world and store a copy of them. If, for example, you see a sofa somewhere and you don't know whether it would fit in your house, you could just scan it, then go back home and try to place it where you want. It's a sort of advanced measuring tool that also lets you look at the final space complete with your product, and it could be done with any object in the world; it wouldn't be limited to 3D models of products created by a company like IKEA, as in this case. I really hope I will live long enough to see these types of AR applications.
This article is related to my second project; in particular, it involved experiencing the virtual environment at a much larger scale, in this case a scale of 10. I scaled everything in the room to be ten times bigger and put the Vive on to experience something unique: a really unreal world, given credibility by the fact that I know the environment pretty well from the amount of time I spent in there designing and testing.
The other thing that made the experience really bad was the navigation. Even if I moved around the 3-by-3-meter physical space I had, in the virtual world I was almost standing still in the same place. This made it impossible to move around and try to pick up objects or interact with them, and the only solution was to restart the game, changing the virtual position of the play area so that I would start in a place where I could actually interact with some object. The only way I could make progress in the actual "game" was to allow the user to teleport, but even then the range of the teleport is really small compared to the room, which means you have to take a lot of teleporting steps to navigate where you want. Climbing on top of objects is still impossible even with teleport, because they are way too high and out of reach for your small pointers.
I found no technical problems with the textures and the alignment, though. This could be because of how the objects and the room were designed; of course the resolution of the textures was worse, but still good, and not something you'd notice at first glance. This means that doing this type of scaling, maybe at a lower absolute value, is possible and could sometimes be useful.
Where this type of transformation could be useful in today's design methods is in having really flexible objects, objects that can be scaled at runtime while testing their design. Imagine a company that, before building the manufacturing machines for a commodity item, actually has real users test it in virtual environments and collects their feedback. In this kind of scenario, the advantage of playing with object scaling would be the possibility to let people experiment with various sizes of an object without having to put much effort into recreating the whole model: it would just be a matter of changing a parameter, or giving the user a virtual slider to scale it as they like best. This saves the company a lot of design time and also allows easy and practical testing, which is useful in order to make a product the right way and at the right size on the first try, which translates into less time and money spent on production, higher revenue, and higher customer satisfaction.
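To make the "just change a parameter" point concrete, here is a tiny engine-agnostic sketch where a model is simply a list of 3D vertices and a single factor, the kind of value a virtual slider would drive, resizes it about a pivot:

```python
# Minimal sketch of uniform runtime scaling: the whole model is
# resized by one factor, about a chosen pivot point.
def scale_model(vertices, factor, pivot=(0.0, 0.0, 0.0)):
    """Scale every (x, y, z) vertex by `factor` about the pivot."""
    px, py, pz = pivot
    return [
        (px + (x - px) * factor,
         py + (y - py) * factor,
         pz + (z - pz) * factor)
        for (x, y, z) in vertices
    ]
```

In Unity the same effect comes from changing an object's local scale on its Transform, but the principle is the same either way: one number drives the whole model, so no remodelling is needed between test sessions.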
A very big chandelier
Scale up bongo and cemetery out of the room
Holoportation is a technology created by Microsoft that allows real-time tracking and 3D model reconstruction of people. With this technology and a mixed reality HMD like the HoloLens, it is possible to actually see and interact with the virtual avatar of a person, which makes it like teleportation in the form of a hologram, hence the name.
I want to start by saying I was really impressed by this technology. It was the first time I had heard of it, but it is actually a really old concept, often seen in futuristic and sci-fi movies. This is mainly because it's clearly something human beings need. We've always felt the need to stay in touch with the people we care about when we're not close to them, so we used to send letters, then text messages and phone calls, now video calls, and the future will surely involve an augmented reality form of teleportation, similar to what holoportation does.
I think that one of the best things virtual and augmented reality can do is bring us somewhere else, or bring someone else to us, and some kind of interaction with real people is the key to making everything feel more realistic and interesting. Most applications use virtual characters to interact with the user simply because it is easy: no need for online connectivity or real-time data transfer.
The main problem with these kinds of applications, and holoportation's main success, is creating avatars that other people can recognize, or making them look realistic enough that we feel something when we look at them. Many apps like AltspaceVR failed at this: avatars are not customizable and most users look the same. Others like Rec Room have slightly better customization, where users can choose some features of their avatars. Facebook Spaces avatars are somewhat recognizable people, but maybe too cartoony. Holoportation does this really well, and it does it in real time based on the images it captures, which makes it an even more difficult task.
Looking at the demo video, the avatar seems very well done: it is recognizable, and its overall shape matches the person it embodies. Of course the level of detail, especially in the face, is still far from perfect, but I really think you would recognize any person you know in real life.
The main disadvantage is of course the amount of hardware and internet bandwidth required. Many cameras pointing from different angles are needed to capture the person from every side. This kind of hardware is pretty expensive, and it is not something most people would buy today just to get a better, more realistic way of communicating with their friends. High internet bandwidth is also required to transfer the huge amount of data representing the 3D model. Another big disadvantage, which Microsoft itself noticed and tried to remedy, is portability: you can never use this technology outside of your house, while you are travelling for instance, mainly because it requires a long and sophisticated setup, but also because of the limited bandwidth you would have. They actually managed to make it work in a car by greatly reducing the bandwidth needed through model-compression methods. It is surely a great technology to have; unfortunately, it requires a lot of hardware and space to install it.
If people started to use it in their homes now, they would need a special room dedicated to holoportation, which is something most people don't have. The target audience is still very small, and maybe that is why I had never heard about it before. With time, the hardware and the algorithms will become more sophisticated, and maybe only three small wireless cameras will be needed; in that case, setting up the space would be much quicker, and we can expect more and more people to adopt this kind of telecommunication. Also, some public places around cities could offer it as a service: by paying a fee, people would be able to access the technology without the initial setup.
For this week’s homework I had to choose a student’s choice topic and write my own critique of it.
I chose to talk about CoolPaintr VR, a game developed by Sony for PlayStation VR. Its main feature is painting in a 3D space in virtual reality. This application was really fascinating to me; I've always been attracted to art, and in particular to crafting. A few years ago I bought a 3D pen, which works in the same way as a 3D printer, using heated plastic that cools down as you draw in the air. This application allows the same thing, but in virtual reality. A very good thing about it is that after you're done with your work you can export it as a Collada (.dae) file, save it onto a USB drive, and bring the model into software like Unity to be used in a game or application. You can also take the model into a more sophisticated edit mode by importing the Collada file into 3D modelling software like Blender or SketchUp. From software like the above you can also export the model in a format supported by a 3D printer and actually give life to your creation.
I really like the fact that you can take your time in there and experiment with different things in total freedom, since you're not actually using real, expensive materials. This, and the fact that the software is precise to the millimeter, really allows you to create a masterpiece, continuously revising and editing thanks to the dynamic switching between paint and edit modes. That is often not possible in real life, where you frequently have to start a drawing again from scratch after messing up the previous one too much.
Going deeper into the technical implementation, it looks really easy to use: you just need a single controller to start creating, and there are a lot of brushes, shapes, a color picker, a filler, and the usual tools you find in digital painting software. The good thing about having all this in software is that there is no limit on the number of tools that can be added, and no limit on the colors you can use.
A really interesting feature is the possibility to import an actual image into the 3D space, to use as a reference for painting and building a 3D model on top of it. This is commonly done in 3D modelling software like Blender: it is easier to model when you have a 2D picture to follow and compare against. With this kind of feature it could be easy even for people who are not really into art to start creating cool models, and maybe improve their art skills with practice.
Another cool possibility is integrating the game with Spotify. A lot of people's creativity comes out when listening to music, so I think this is a perfect combo to maximize it. If you don't have a Spotify account, you can also upload your own music and paint while listening to your favorite songs. I think it would also be nice to be able to watch videos, as on a TV, or to look at text messages and maybe reply from inside the application, since you can stay "trapped" in this kind of game for a long time while trying to create something beautiful, and you don't want to leave the virtual space every time you need to check the time or respond to someone.
The main con of the application is that there is no way to erase part of a line: once you create a line, you either delete all of it or nothing. I think it would be better to add a normal eraser, like the one we're used to. That would also make drawing easier, since we wouldn't have to worry about how long a line is going to be, or split the drawing into multiple lines just to avoid erasing too much when we make a mistake. Another thing I think could be improved is collaborating on your art pieces with other people. It would be really cool to play online with a friend and work on the same object with multiple people, so that everybody can concentrate on what they are best at. It would also make it easier to learn from others how to use the application and improve your skills.
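One way the suggested eraser could work, assuming a stroke is stored as a polyline (a list of 3D points), is to remove the points near the eraser and split the survivors into separate strokes. This is only a sketch under that assumption; the names and data layout are mine, not CoolPaintr VR's actual internals.

```python
# Sketch of a partial eraser for polyline strokes (assumed representation).
# Points within `radius` of the eraser are removed, and the remaining runs
# of points become independent strokes, instead of deleting the whole line.
from math import dist
from typing import List, Tuple

Point = Tuple[float, float, float]

def erase_partial(stroke: List[Point], eraser: Point, radius: float) -> List[List[Point]]:
    """Return the surviving sub-strokes after erasing points near `eraser`."""
    pieces: List[List[Point]] = []
    current: List[Point] = []
    for p in stroke:
        if dist(p, eraser) <= radius:
            # Point is erased: close off the piece built so far, if any.
            if current:
                pieces.append(current)
                current = []
        else:
            current.append(p)
    if current:
        pieces.append(current)
    return pieces

# Erasing the middle of a straight 5-point stroke leaves two shorter strokes.
line = [(float(i), 0.0, 0.0) for i in range(5)]          # points at x = 0..4
parts = erase_partial(line, eraser=(2.0, 0.0, 0.0), radius=0.5)
```

Splitting rather than deleting is exactly what removes the need to plan line lengths in advance.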
For this week’s homework I had to choose a student’s choice topic and write my own critique of it.
I chose to talk about the Vuzix Blade Smart Glasses. This new type of glasses is the next step in augmented reality, offering a very slim and stylish design, very similar to normal sunglasses.
During the course we've talked about, seen, and tested many types of HMDs and AR glasses. The common reason none of them has spread widely is mainly their external appearance: they're all pretty invasive and heavy, and no one would ever wear them out in public. This is the first AR headset I have seen that people could plausibly wear throughout their entire day, just like any other pair of normal glasses or sunglasses.
These smart glasses appear to have all the functionality of many other AR headsets, at a price in the middle-to-low range for AR glasses. They are much cheaper than the HoloLens but still offer the same kinds of functionality, and maybe some better features. They offer the same kind of interaction with the system as the HoloLens, either using your voice or pinching within the viewing window in front of you. Additionally, they have the benefit of touch controls on one side, and they can connect to your phone and other devices via Bluetooth or Wi-Fi.
They also conveniently support Amazon Alexa and Android, which makes it very easy to pair them with a phone and use them as an additional device when, for instance, your hands are too cold to use your phone in the freezing winters of Chicago. This is one of the main reasons someone would choose to start using a pair of AR glasses, so it's really good that they supported these technologies from the beginning: people can already understand the capabilities of these devices, which helps push new AR technologies onto the market.
I like how the display window appears to be similar in size to the HoloLens's, so the notifications and virtual elements that appear on the display will hardly be big enough to limit what you can see of the real world, making them possibly not too distracting. At the same time, the displayed information is really useful. Unlike other types of applications, where you have to be immersed in a virtual environment, staying focused on the real world is essential here.
The possible applications of AR technologies in a pair of glasses, as we've seen during the course, are endless: GPS navigation, tracking and displaying 3D models on a target image, recognizing restaurants and architecture and displaying useful information without having to search for it yourself on Google Maps or the internet. They are also very useful for quickly checking notifications, messages, and emails even when it would not be appropriate to do so with your phone (e.g. at the cinema, or at a conference). The only problem is how to reply to messages; for now the only way is to dictate vocal messages to software that extracts the text.
I think the Vuzix Blade is one of the most promising pairs of AR glasses developed so far. There are still some issues to address about wearing AR glasses in general. In class we discussed how people would not be okay with others wearing this type of device all the time, mainly because of privacy, since many glasses are capable of recording audio and video. There will be a transition period during which people will probably start to accept being around hardware eyes all the time, but we're still not there yet. By the time that transition happens, I think the technology will have advanced so much that even this type of glasses will be considered invasive, and we'll end up just using our normal glasses with really small hardware embedded in them. It will be a very interesting reality, and I can't wait to see what kinds of benefits these devices will bring into our lives, but I'm pretty sure it's going to be awesome.
mrk23 at hotmail dot it