Sunday 9 June 2013

Week 14 - Final individual Milestone

At the end of our project, the Geriambience team has created and tested a system that tracks a user through a virtual bathroom using the skeletal tracking function of the Microsoft Kinect. Three example bathrooms have been modelled, replicating real-world bathrooms, all fitted with Caroma products as Caroma sponsored the overall project. The Kinect tracks a user and represents them in these virtual bathroom environments as a set of orbs corresponding to their body parts. The interaction with these environments ranges from simple moving fixtures to smart, gesture-based interactions that the user can perform to alter their bathroom.

Group objectives

Our group had three main objectives throughout the course, although one of these was dropped due to time and technology constraints.
1. Model, in high detail, a series of three bathrooms fitted with Caroma products.
2. Using the Microsoft Kinect's code, create a series of interactive elements to place into these bathrooms. These elements use gesture and positional information taken from the Kinect to register the player in the virtual world and let them interact with it. This is proof-of-concept work, and has real-world applications if taken further.
3. Create a moving mount for the Kinect so that it can track the user no matter their position in the bathroom. This is the objective our group had to stop working on. We kept running into problems with it: we realised that finishing the system would take at least the rest of the semester, and the Arduino work required us to order a large number of different parts, making it infeasible to actually set up.

Individual Milestones
To properly meet the group objectives, I set myself a series of personal milestones. These were things I both wanted to learn and needed to learn in order to fully achieve our group goals.

1. C++ coding - I have very limited experience with coding, and it's something I've wanted to learn more about for a long time. The reason I picked this project over the others available is that I saw it as a fantastic way to get practical coding experience. That urge to learn is why it became my major milestone for the semester.
2. Arduino - Linked to my interest in learning C++ was my desire to learn how to use Arduino kits. I'd never used anything like them before, and they seemed like a fantastic prototyping and learning tool, so I was very eager to get started with them.
3. Project leadership - When we were choosing teams, Laura asked me to take over from her in the role of group leader. I've led groups for major projects before and found it a good experience, so I wasn't upset about this. Working within a group is never easy, so I wanted to make sure that as group leader I enabled our group to finish the project and deliver what we set out to do.
4. Presentation of work - After the first milestone submission, I realised that the presentation of my work was severely lacking, so I wanted to put a lot more effort into presenting what I had been doing. This also carried over to the group wiki, which I then made sure all group members were maintaining and uploading to.

My contributions
Kinect Interactivity:
I have only ever briefly used a Kinect in a development sense once, running through Grasshopper and Rhino, so using C++ to alter the Kinect code was completely new to me. I had a very basic knowledge of programming, so I could follow Stephen Davey's explanation to Matt and me about how to set up a new CryEngine Flowgraph node and how to write the code behind it. This was the starting point for our development. I had to sit back and let Matt work through the start of our gesture detection node, but once I had seen him do some work on it, I was able to take over and contribute my own parts.
Our final node looks like this:

This node built on one Stephen Davey had already written: the top section gathering the walk speed, turn amount, jump, leaning and pointing mechanics. We then added the parts that gave us the output information we needed. The positional elements are the ones with pink output boxes. These give us the vector coordinates for each part of the user's body and allowed us to create a tracking skeleton. The blue boxes are booleans, showing whether the Kinect is tracking someone and whether they have fallen over. The white output second from the bottom is our gesture detection, which counts a value within the C++ code and outputs it as a float.
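For anyone curious what sits behind that picture, here is a minimal sketch of how a node like ours exposes its outputs, assuming the CryEngine 3 Free SDK flownode API that Stephen showed us. The port list is abbreviated, and m_hip is a stand-in for wherever the Kinect data actually gets read:

    #include "StdAfx.h"
    #include "Nodes/G2FlowBaseNode.h"

    // Abbreviated sketch of a Kinect output node; the real node has a port for
    // every joint, and m_hip would be filled from the Kinect SDK elsewhere.
    class CFlowNode_KinectSkeleton : public CFlowBaseNode<eNCT_Singleton>
    {
    public:
        CFlowNode_KinectSkeleton(SActivationInfo* pActInfo) {}

        enum EOutputs { OUT_HIP = 0, OUT_TRACKED, OUT_GESTURE };

        virtual void GetConfiguration(SFlowNodeConfig& config)
        {
            static const SInputPortConfig inputs[] = { {0} };
            static const SOutputPortConfig outputs[] =
            {
                OutputPortConfig<Vec3>("Hip", _HELP("Hip joint position")),    // a pink box
                OutputPortConfig<bool>("Tracked", _HELP("Is a user tracked")), // a blue box
                OutputPortConfig<float>("Gesture", _HELP("Gesture counter")),  // the white box
                {0}
            };
            config.pInputPorts = inputs;
            config.pOutputPorts = outputs;
            config.sDescription = _HELP("Outputs Kinect skeleton data");
            config.SetCategory(EFLN_APPROVED);
        }

        virtual void ProcessEvent(EFlowEvent event, SActivationInfo* pActInfo)
        {
            if (event == eFE_Initialize)
                pActInfo->pGraph->SetRegularlyUpdated(pActInfo->myID, true);
            else if (event == eFE_Update)
                ActivateOutput(pActInfo, OUT_HIP, m_hip);   // one call per port
        }

        virtual void GetMemoryUsage(ICrySizer* s) const { s->Add(*this); }

    private:
        Vec3 m_hip;
    };

    REGISTER_FLOW_NODE("Kinect:Skeleton", CFlowNode_KinectSkeleton);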

The first step in developing our system was making sure the tracking worked properly. This video demonstrates the first joint we put in, the hip joint. It tracks the user's hip, and showed us that the axes were oriented differently for the Kinect than for CryEngine: Matt's distance away from the Kinect (Z axis) changed the ball's height (Y axis). This was easy to swap around once we realised.
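In code, the fix boils down to one component swap; a minimal sketch (Vec3f here is just a plain struct, not the engine type):

    struct Vec3f { float x, y, z; };

    // The Kinect reports Y as height and Z as distance from the sensor,
    // while CryEngine treats Z as height, so the two components swap.
    Vec3f KinectToCryEngine(const Vec3f& k)
    {
        Vec3f v = { k.x, k.z, k.y };
        return v;
    }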
Once we had this first positional data working, we got to work putting the rest of the person in.
We decided to use balls to represent the parts of the body because we couldn't get an actual person in the game to move.
Throughout this part of the development, Stephen suggested that I record some of our work using ChronoLapse, a program that takes screenshots of your computer screen every X seconds and then stitches them into a video. Here is the first one I took. It shows me doing some work in C++, then compiling. Most of our time in this part of the project was spent compiling.

We had two focuses for our interaction: gesture-based systems and position-based ones. For our position-based systems, we started by altering something based on the relative position of the user's hand and the object itself. This video demonstrates this by resizing a ball based on how close the hand is to it.
This system was the basis for our moving bench and moving toilet systems.
The next test we ran was the moving toilet. I set this one up using a sideways door to simulate a bench. During this stage of development I looked into the ergonomics of seats. A regular seat keeps a specific ratio of the person's height to seat height to optimise comfort and avoid putting strain on the person. This translates into a specific bending angle of the knee, which for a regular seat is slightly over 90 degrees. For a toilet this angle is reduced to just under 90 degrees, optimally around 80. The angle is measured with the user sitting comfortably, feet planted naturally on the floor in front of them. The system demonstrated below takes the leg height of the person and adjusts the seat to match this ratio.
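I haven't reproduced our exact flowgraph maths here, but assuming the thigh sits roughly horizontal when seated, the geometry works out to something like this sketch (the 80-degree default is the toilet case described above):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    float Distance(const Vec3f& a, const Vec3f& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // With the foot planted and the thigh horizontal, a knee angle of exactly
    // 90 degrees puts the seat at lower-leg height; tilting the shin to an
    // 80-degree knee angle lowers the seat by the sine of that angle.
    float SeatHeightFor(const Vec3f& knee, const Vec3f& foot, float kneeAngleDeg = 80.0f)
    {
        const float lowerLeg = Distance(knee, foot);
        const float rad = kneeAngleDeg * 3.14159265f / 180.0f;
        return lowerLeg * std::sin(rad);
    }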

The area of our interaction I was most interested in was the temperature control. We used a gesture for this one: if your right forearm is horizontal, your left hand then alters a value by being above or below your waist. The initial test took the value and adjusted the scale of an orb, but this was later changed to a block that scaled in height.
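The detection logic itself is simple; a hedged sketch in plain C++ (joint names and the 5 cm tolerance are illustrative, and Y is height in Kinect space):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // The pose: right elbow and hand at nearly the same height reads as a
    // horizontal forearm.
    bool RightForearmHorizontal(const Vec3f& elbow, const Vec3f& hand)
    {
        return std::fabs(elbow.y - hand.y) < 0.05f;   // within ~5 cm
    }

    // While the pose is held, nudge the value up or down depending on whether
    // the left hand is above or below the waist; otherwise leave it locked.
    float UpdateTemperature(float current, const Vec3f& rElbow, const Vec3f& rHand,
                            const Vec3f& lHand, const Vec3f& waist)
    {
        if (!RightForearmHorizontal(rElbow, rHand))
            return current;
        const float step = 0.01f;                     // per Kinect frame
        return (lHand.y > waist.y) ? current + step : current - step;
    }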

This was the extent of our basic work before we began to integrate it with the work of the bathroom design side of the group. They gave us a fully developed level, into which we brought our flowgraph and linked it up to everything we needed.
The video below is a timelapse of us setting up some of the interactivity in the finished bathroom.
This next video is one I made as a backup for our presentation. It demonstrates all of the interactive elements of the bathroom, and annotates how they work.

Below are images of our final flowgraph. I'll try to explain what each section does, although it may be difficult to tell just from the pictures.
Full flowgraph
This section takes all of the output coordinates from the Kinect and creates positional vectors from them to input into the balls that represent the player.
creates positional vectors
This one assigns those vectors to the balls. I laid the nodes out like a person so we could easily determine which entity related to which body part. This became very useful later on, when we were using these entity names in other parts of the flowgraph.
assigns the entity positions
These next few all relate to the light controls, and their UI messages.

The math behind the light brightness




This section shows the area of the flowgraph responsible for the gesture controls for the shower temperature and the UI messages.
Shower temp controls and UI.

As well as working on the Arduino and Kinect/Crysis work, I was in charge of creating the documentation for the programming side of things. This involved the screen captures using ChronoLapse, as well as creating the demonstration videos of our work in progress. I filmed Matt interacting with the Kinect on my phone while using a screen capture program to record CryEngine. I would then edit these two videos together and upload them to my YouTube channel so they were available for the group to use. I also took a number of screenshots of the flowgraph and C++ code to share around.
This all culminated in the final demonstration video that I created in case our system failed on the day of the presentation (which it did, but that was a CryDev problem, not ours, so it was postponed). The video demonstrates all of our interactive elements and gives an annotated explanation of each.




Individual Development
Interaction development:
For this project the interaction was based both in C++ and in the CryEngine Flowgraph, so it was essential to learn to use both properly for the logic operations we needed to perform. I learned a lot about the C++ language and its implementation throughout the course. The main thing I learned was how to properly structure and set up a piece of code in order for it to work. This was especially important for the Kinect project, because unless every part of the new node you create is structured properly, it won't work.
In addition to this, trying to set up the gestures showed me things about coding I had never thought about. For example, we used a counter within the code to measure how long something was true for, counting the Kinect's refresh frames, with a series of "if" and "then" statements to decide what to do afterwards. This ended up as a loop that resets the counter whenever the condition is broken. It's the sort of logic I would never have learned without attempting this kind of gesture detection.
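Distilled, the counter logic looked roughly like this (the frame threshold is illustrative):

    // Counts consecutive Kinect frames for which a condition holds and
    // resets the moment it breaks - the loop described above.
    class HoldCounter
    {
    public:
        explicit HoldCounter(int requiredFrames) : m_required(requiredFrames), m_count(0) {}

        // Call once per skeleton frame; returns true once the condition has
        // held for the required number of consecutive frames.
        bool Update(bool conditionTrue)
        {
            if (conditionTrue)
                ++m_count;
            else
                m_count = 0;
            return m_count >= m_required;
        }

    private:
        int m_required;
        int m_count;
    };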

In terms of the CryEngine Flowgraph, I already had a fair amount of experience using it, but never in conjunction with another system like the Kinect. I had never used actual mathematics within the flowgraph, which in this case was essential because we constantly needed to calculate new values. For example, finding the distance between two points means applying Pythagoras' theorem to the vector components, d = sqrt(dx² + dy² + dz²), which becomes a mess of nodes when implemented in the flowgraph.

Working with Matt on the coding side of things was brilliant because of his huge amount of background knowledge in programming. He helped me learn a lot through this project, and I definitely wouldn't have been able to do anywhere near as much without him. I found it interesting when we got to the Flowgraph side of things, because he kept thinking in terms of code, which doesn't translate well to the flowgraph even when it makes logical sense: you can't have constructs like loops and conditions in the flowgraph the way you can in code. I'm good with logic, so on a number of occasions I was able to think of approaches Matt wouldn't have, purely because I was thinking about them not in a programming sense but in a more general way. A good example was using gate nodes to properly trigger booleans.

In this picture, the gate is the node flowing into the large collection of nodes at the bottom. This turned out to be a good workaround for some problems we were having with booleans triggering.

Intellectual Property:
I chose our group's presentation topic, based on the idea that our project was taking a large amount of other people's intellectual property and utilising it for our own gains. I thought it would be the most fitting subject for our project, and that it would be interesting to look into how to properly use other people's property as well as protect our own. My area of research for the presentation was the different types of IP our group was using that belonged to others, and how to properly go about using them. This had me examine the licensing agreements for the different IP we were using, and gave me a good idea of how to use it for non-financial gain as well as commercially. This side of the project also became very relevant to me, as I had to investigate the same area for another course weeks later, and it gave me some very specific information to take with me to that project.

Collaboration:
Our course seems to favour group work, so we were no strangers to collaboration, especially in large groups. For this project, we split into two smaller groups: Matt and myself working on the programming and interactivity side, while Laura, Dan and Siyan worked on the modelling and visualisation side.
I've worked with Matt on a number of projects before and knew from the outset that we would work well together. Our main hindrance was that his computer wouldn't run CryEngine because of a Windows 8 incompatibility, and that working on one piece of code across multiple computers would prove impossible. This saw us coming into uni multiple days a week purely to work together on this project. That turned out to be very beneficial to both of us, because it let us bounce ideas off each other, and when one of us was struggling with a problem the other could usually help solve it. Having two of us also meant one person could test the Kinect system while the other edited it, which saved a lot of time.

With the group split in two, we had a very disjointed start to the project, because the first part of each group's work was largely unrelated to the other's. During this time we had very little contact with the other group, which, as group leader, I should have rectified much sooner than I did. However, once we reached the stage of integrating our Kinect interaction with their level, we became a much more collaborative team.
The design side of the group put a large amount of detail on the wiki, which was amazingly useful during the later stages of the project, especially in setting up our final presentation. They had all the plans for the bathrooms, which allowed us to map out where to place everything, as well as to decide where the Kinect and the CryEngine camera view should go.

What I would do differently
I'll split this section into two. Firstly, what I would do differently as a team leader, and secondly what I would do differently as a member of the team.

As leader, I feel I should have taken a much bigger role in organising the team at the beginning of the project. Doing so could have saved us a lot of time and made us much more efficient. I would have set up a very clear list of deliverables that we expected to have accomplished by the end of the project. As it was, our deliverables changed dramatically over the course of the project, possibly because they weren't locked down at the start.

I would also have made a much bigger effort on team cohesion. The way we split the group in two worked well enough, but it left us not knowing what the other half was doing most of the time. If I did it again, I would spend a lot more time making sure the entire group was interacting throughout, possibly by splitting the roles differently. For example, we could all have been involved in small parts of each side on a weekly basis. While this might have slowed things down, if it was structured strictly it could have made us work better, with everyone knowing what everyone else was up to at all times.

As an individual within the group, there are a number of things I would have done differently as well. Firstly, I would have been much more diligent about documenting my progress in the early weeks of the project. The first milestone submission was a good kick in the right direction and forced me to become much more aware of recording progress, both on video and in writing.
Secondly, I would, if possible, have done more research and individual work on the C++ side of things. As it was, I learned most of what I know through watching or learning from Matt. It would have been nice to delve a little deeper into it all.


Summary
At the end of this project, we have created a very effective and easy-to-learn system for controlling different aspects of a virtual bathroom. While the project only serves as a proof of concept, I feel it definitely proves that this sort of system could work. It is all well documented on the wiki and in the members' personal blogs, which act as a resource for anyone interested in these sorts of systems. I feel this is a project we would be able to take to a much higher level if Caroma funded us to actually try to produce these systems for real-life purposes. I can see systems like this being in everybody's home in ten or so years, and it would be fantastic to be part of the reason they are there.






Wednesday 29 May 2013

Week 12

We spent this week getting everything ready for next week's presentations, finalising all of our interactive elements and preparing our group talk.

Lights:
At Russell's suggestion we implemented another gesture-based interactive element. The lights were the best fit, so we set it up so that they are adjusted by holding your hands at shoulder height and moving them, keeping them equidistant from your head. Once your hands are no longer at that height, the system locks the variable at the last set value. We set the same system up for the mirror lights: walking up to the mirror switches control from the main lights to the mirror lights, and it works the same way from there.
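A hedged sketch of the logic (the tolerances and the one-metre-spread-equals-full-brightness mapping are illustrative, with Y as height):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // Both hands near shoulder height activates the control; the brightness
    // then follows the horizontal spread of the hands and locks on release.
    float UpdateBrightness(float current, const Vec3f& lHand, const Vec3f& rHand,
                           const Vec3f& lShoulder, const Vec3f& rShoulder)
    {
        const float tol = 0.10f;   // hands within ~10 cm of shoulder height
        const bool active = std::fabs(lHand.y - lShoulder.y) < tol &&
                            std::fabs(rHand.y - rShoulder.y) < tol;
        if (!active)
            return current;                 // hands dropped: lock the last value
        const float spread = std::fabs(lHand.x - rHand.x);
        return std::fmin(1.0f, spread);     // 1 m apart = full brightness
    }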




Russell also suggested that we add more direct UI feedback about what is happening, and give better instructions on how to use everything. I spent most of the studio class setting up these UI systems, and they all work nicely now, giving proper feedback. I had to make them appear in set parts of the screen so they wouldn't overlap each other if more than one was triggered at a time. For next week's presentation we will be coming in the day before to set up an area within the classroom for our demonstration. We plan to set up a fake bathroom using tables as walls, and to mark out the different sections of the room with masking tape, representing things like the bench and shower. This will be the main part of our group's presentation, as we felt that giving a live demo of our system would be more beneficial than spending the entire time talking. In case the system doesn't work during the presentation, I also made a video demonstrating all of the interactive elements as a backup.



As Matt is always the person in the videos we make (because I don't like being filmed), I've been the one filming them and recording the screen. I then edit these myself using Sony Vegas and overlay the video of Matt onto the screen capture. For these last few videos I also added annotations to explain what is happening.

Wednesday 22 May 2013

Week 11

This was the week we combined the separate work our group members had been doing into one cohesive project. The team working on the modelling and visualisation side gave Matt and me a copy of the level, to which we began adding our Kinect interactivity. The first step was choosing the right bathroom to use, as they had modelled three sizes: small, medium and large. I decided the best one to use would be the largest, as it would be much easier to use as the example for our real-world demonstration at the final presentation, and the group agreed.

Matt and I spent most of the week's tutorial, as well as most of the next day, working to implement the interactive elements from the Kinect.

Temperature:
At the moment, the temperature control is the only one we have set up with an actual gesture. As in our earlier demonstration using a ball as a substitute, the user holds their right arm horizontal to activate the control, then controls the variable by holding their left hand above or below the waist. I set up a red rectangular prism in the corner of the shower to represent the variable, as I thought this would be the clearest way to show the changes. Our other option was to increase a steam particle effect, but I thought the prism would give a much clearer reading of the gesture's effect. This was the easiest of the interactive elements to connect up, because we already had it running earlier and only needed to change what the variable controlled.



Height control:
The height controls for the sink and toilet turned out to be among the hardest things we worked on in the entire project. Our earlier creation along these lines, the "bench" (sideways door) that adjusted its height according to the user's knee level, was a simplified version of what we set out to create here. The early test had no constraints, but we wanted to put several on these objects. One was a maximum height, which was easy enough to limit in the flowgraph. The other main concern was only activating the controls when the user wants them. For this, we decided to use a proximity test: if the user is within a certain distance of the sink or toilet, the flowgraph activates the part of the graph that controls them; otherwise it remains at its last set value. We set this up after Russell looked at the work and suggested that constant movement would wear out the parts much quicker in a real-life version, so stopping it when it isn't needed would save that wear and tear. It also stops them moving when, for example, you bend down to grab something, or stops the sink moving when you sit on the toilet.
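Put as code rather than flowgraph nodes, the constraints amount to this (the one-metre radius and Y-up Kinect coordinates are assumptions):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // Only track the desired height while the user is close to the fixture,
    // and never above the maximum; otherwise hold the last set height.
    float UpdateFixtureHeight(float current, const Vec3f& user, const Vec3f& fixture,
                              float desired, float maxHeight)
    {
        const float dx = user.x - fixture.x;
        const float dz = user.z - fixture.z;
        const float dist = std::sqrt(dx * dx + dz * dz);   // distance along the floor
        if (dist > 1.0f)
            return current;                                // out of range: hold still
        return (desired < maxHeight) ? desired : maxHeight;
    }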



Light controls:
At the moment we have the lights set up to activate when the Kinect recognises someone in the room. This seemed like it would be easy, but turned out to be difficult because of the way the Kinect works. Because of its limited range, it struggles to see someone on the outskirts of the example room from where we have it situated. Also, when the Kinect loses sight of someone, for example because they have left the room, it leaves the last set of data it registered in the system, so the orbs we use to represent the player stay where they last were. This means the lights mostly stay on, even if you leave the room and the Kinect's range.
We also set up controls for the mirror lights. They work similarly to the toilet, turning on when the player moves near them.

Emergency fall detection:
One of our earliest ideas was a test to determine whether the user had fallen over and, if so, call an emergency or family line. We found this to be a really important test, as the entire project is aimed at the elderly. One problem we kept running into was that when the user lies down horizontally, the Kinect loses its ability to track them properly. The coordinates sent to the orbs jump around very randomly, and this can sometimes turn off the trigger. We went through about three different tests before finding one that works reliably. It tests whether the waist height stays within a certain limit of the head over a period of time. This counteracts things like bending down to grab something, which would not meet both the time and position limits.
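The shape of the final test, sketched in C++ (the 30 cm band and three-second window are illustrative numbers):

    #include <cmath>

    // Trigger only when the waist stays within a small vertical band of the
    // head for a sustained period; brief dips (bending down) reset the count.
    class FallDetector
    {
    public:
        FallDetector() : m_frames(0) {}

        bool Update(float headHeight, float waistHeight)
        {
            const bool horizontal = std::fabs(headHeight - waistHeight) < 0.3f;
            m_frames = horizontal ? m_frames + 1 : 0;
            return m_frames >= 90;   // ~3 seconds at 30 Kinect frames per second
        }

    private:
        int m_frames;
    };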



This week I also created a number of videos. I very much dislike being filmed, so I had Matt demonstrate our work while I filmed it. I then edited together the video I took of him demonstrating and a screen capture of the interactivity working in CryEngine, synced them, and uploaded the result to YouTube.

Tuesday 21 May 2013

Remuneration presentation review - Vivid group

The Vivid group were the last to present, on the topic of remuneration. I have mixed feedback for this group, because half of the group presented well and half not so well. The ones who presented well spoke very clearly and were about as engaging as you can get for a presentation on remuneration, although at times they went into too much detail and took up far too much time. The other half gave the feeling that they were very unsure of what they were talking about, and provided far too little detail.

The half that spoke well did so by talking to the audience rather than just reading their notes. They were able to elaborate and properly explain things, rather than reading out a page of large words to impress us, as some other presentations have done. The other half spent most of the time reading their notes off the page and didn't elaborate on anything.

The written part of the presentation definitely had thought put into its structure, as each section flowed well into the next. Its only flaw was that it sometimes had far too much on the screen at once, so it became difficult to take it all in. There was also a lot of jargon thrown around without being properly explained for people who hadn't researched the topic themselves. The examples they provided were very well explained, especially the use of Pat's real-world pay slips to demonstrate what they were talking about. One thing I would have liked to see was their project reflected in the presentation. Theirs is the only real-world project with a budget and an end goal, so it would have been nice to see some of that come through for relevance.

Tuesday 14 May 2013

Group Presentations - Conflict

DCLD: DCLD were the first of the two groups to give their conflict presentation. Overall, the presentation was full of information, but its main downside was far too much text on the screen, a common factor across all of the group members' sections. The written part had lots of detail, but in some cases this caused a problem, as they were trying to get too much information across at some points. They also used too many lists: on a majority of slides, all the information would be listed in full detail, and they would simply run through the dot points one by one. This gave the presentation a very disjointed feel, as they bumped from one topic to the next without any kind of transition.

The oral side of the presentation suffered due to the nature of their slides. They had all the information they wanted to communicate written on the slides, and then simply read off them. A lot of the time I found myself quickly reading the entire slide and then losing attention to what was being said, because I had just read it. A more effective way of presenting would have been for the on-screen dot points to contain only snippets of the information, or headings, which would then be elaborated on in the spoken component. There was also a sense that the group wasn't particularly organised, which I got from the way they handed over between speakers: they would throw the presentation to someone who seemed not to know where to pick up, making each section feel disconnected from the rest. The images they used were decent, although sometimes a bit small or full of words. The flowgraphs seemed very relevant, although it was hard to make out the details because of their size. As far as I could tell, all the images were referenced, although again they were too small to read. These problems would be negated with a copy of the presentation in hand, but from the audience you cannot make out the details.

Kinecting the boxes: Kinecting the boxes presented second on the day. The written side of their presentation was well done. Like the last group they used plenty of lists, but not to the same extent, and there was definitely more flow between the topics, so it seems they thought through how one person would hand over to the next and link their subject matter. They used plenty of examples to demonstrate what they were talking about, although these were somewhat vague and unrelated to their group work. While this isn't wrong, it would have been nice to hear about some conflict resolution that had happened within their own group. Their oral presentation was significantly better than DCLD's. They read off notes rather than the slides, which meant the audience had to listen to gain the knowledge. They did somewhat fail to engage the audience, though, as they were merely reading rather than talking to us. Like the previous group, the images they used contained far too much text, and in some cases, like the flowgraph, were too small to read and took far too long to explain. I found myself losing interest during these long explanations. They explained their topic well, but it is hard to tell whether they understood what they were telling us or were simply reading it off the paper in front of them.

Monday 13 May 2013

Kinect interactivity update

This week we've been working on building more Kinect interactivity into CryEngine. We've done a lot of C++ coding to read skeletal positions and output them into the game engine. Using the flowgraph we can then use these vectors to interact with the game world.




This video shows the initial gesture control we've set up. Holding your right arm horizontal activates the flowgraph, which then determines whether your left arm is above or below your hip. By raising or lowering it you can alter a value that will later be used to change water temperatures in the shower or bath. It could also be used to dim lights or dynamically change other variables in the bathroom.





This demonstration shows a proximity test. There is a radius around the ball; when your hand is inside it, the ball changes size depending on how close your hand is. This could be used to turn lights on and off by moving your hand near them, or to turn taps on and off.


This test calculates the distance between your foot and knee, then scales the height of a seat or bench depending on that value. This is used to alter the height of seats and benches to better suit the user's ergonomics. I investigated the ergonomics of seating and found that proper, comfortable seating leaves the user's knee bent at just over a 90-degree angle. This calculation should ensure that is the case no matter who sits on the seat.

Monday 6 May 2013

Milestone followup

As an extension to the milestone post, here are examples of the Arduino kit working, as well as the Kinect interacting with CryEngine 3.

Simple Motor:
This is a simple motor being turned on and spinning at a set speed: a very basic form of how our Kinect rig would spin and move. By altering the variables in the code for how fast it spins we can control the amount of rotation, and by linking the movement to a different input variable we can determine when to move it.
A copy of the code to create this can be found here.
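Since the linked file isn't reproduced in this post, here is a minimal sketch along the same lines (the pin number and wiring are assumptions):

    // Spin a motor at a set speed using PWM; a transistor on pin 9 is
    // assumed to drive the motor.
    const int motorPin = 9;

    void setup() {
        pinMode(motorPin, OUTPUT);
    }

    void loop() {
        analogWrite(motorPin, 128);   // 0-255, so this is roughly half speed
    }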

Light turning on and off:
This example shows a light being turned on and off by buttons. It demonstrates the ability to toggle one aspect of the Arduino with an input other than just power. In this case it's a switch toggling the light on and off; in our project it could be input from the Kinect sensor telling the Arduino to act.
A copy of the code to create this can be found here. It is modified from a basic sketch that keeps the LED on, adding the buttons to turn it on and off.
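A minimal version of the idea, assuming one button to turn the LED on and one to turn it off (pin choices are illustrative):

    const int onButton = 2;     // assumed wiring, using internal pull-ups
    const int offButton = 3;
    const int ledPin = 13;

    void setup() {
        pinMode(onButton, INPUT_PULLUP);
        pinMode(offButton, INPUT_PULLUP);
        pinMode(ledPin, OUTPUT);
    }

    void loop() {
        // With pull-ups, a pressed button reads LOW.
        if (digitalRead(onButton) == LOW)  digitalWrite(ledPin, HIGH);
        if (digitalRead(offButton) == LOW) digitalWrite(ledPin, LOW);
    }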

LED dimming:
This one is essentially the same as the last, although the loop part of the code is altered to constantly increase or decrease the brightness of the LED depending on whether the on or off button is pushed.
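Roughly, the altered loop looks like this (pins are illustrative; the LED has to sit on a PWM-capable pin):

    const int upButton = 2;     // assumed wiring, internal pull-ups
    const int downButton = 3;
    const int ledPin = 9;       // PWM-capable pin
    int brightness = 0;

    void setup() {
        pinMode(upButton, INPUT_PULLUP);
        pinMode(downButton, INPUT_PULLUP);
        pinMode(ledPin, OUTPUT);
    }

    void loop() {
        // Ramp the brightness up or down while the matching button is held.
        if (digitalRead(upButton) == LOW && brightness < 255)  brightness++;
        if (digitalRead(downButton) == LOW && brightness > 0)  brightness--;
        analogWrite(ledPin, brightness);
        delay(10);   // sets how fast the ramp runs
    }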

Light array:
The light array code turns the lights on in sequence, applying an increasing delay as it moves along the array, making it look like the light is moving across them all. The notion of a delay in code would be useful to our project both in moving the Kinect and in interacting with the Crysis environment through it, as some interactions, such as positioning different things in the bathroom to suit the occupant, would benefit from a slight delay.
The code can be found here.
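The chase effect boils down to a loop with a delay between each LED; a minimal sketch (pins are illustrative):

    const int leds[] = {2, 3, 4, 5, 6, 7};   // assumed wiring
    const int numLeds = sizeof(leds) / sizeof(leds[0]);

    void setup() {
        for (int i = 0; i < numLeds; i++)
            pinMode(leds[i], OUTPUT);
    }

    void loop() {
        // Each LED lights a little later than the last, so the light appears
        // to travel along the array.
        for (int i = 0; i < numLeds; i++) {
            digitalWrite(leds[i], HIGH);
            delay(100);
        }
        for (int i = 0; i < numLeds; i++)
            digitalWrite(leds[i], LOW);
        delay(300);
    }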

Potentiometer:
A potentiometer is a variable resistor that, in this case, has 5 volts across its outer pins and reads a value between 0 and 5 volts depending on the angle it is turned to. In this setup, the value is stored and used to determine the speed at which the light turns on and off. Its application to our project is that it demonstrates how to gather an input, store its data and then use that to change an output.
The code can be found here.
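A minimal version of the read-store-use pattern described (pins are illustrative):

    const int potPin = A0;   // middle pin of the pot; outer pins to 5 V and GND
    const int ledPin = 13;

    void setup() {
        pinMode(ledPin, OUTPUT);
    }

    void loop() {
        int value = analogRead(potPin);   // 0-1023 across the 0-5 V range
        digitalWrite(ledPin, HIGH);       // the stored input sets the blink speed
        delay(value);
        digitalWrite(ledPin, LOW);
        delay(value);
    }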

Sunday 5 May 2013

Individual Major Milestone

Within this project, I set myself a few milestones at the beginning in preparation for this point in the course. The main thing I wanted to do was further my knowledge of programming and focus on that aspect of the project's development. I was also very interested in the mechanical side of our design proposition: moving the Kinect around a room in relation to a person's position.

At this point in time, I'm in charge of the group, although it wasn't always that way. To begin with, Laura was our leader, but because she wasn't confident in her ability to be in charge she asked me to take over. I have mixed feelings about this for a number of reasons. On one hand, it was probably a good idea, because I feel I make a better group leader than she would have, especially as she went into it believing she wouldn't be. On the other, I have a feeling it would have been better for the group as a whole if she had stayed on. At the moment our team is struggling with interaction, mainly due to the language barrier we face with members who don't have English as their first language. Having Laura as our leader may have negated some of this, as she could take charge in their native language, where I sometimes struggle to communicate with those group members.
I feel this barrier has somewhat hindered our ability to work thoroughly on our project. In conjunction with the language barrier, I feel some members of the group are quite disinterested in the project, so much so that they do a little work on their part and then put it out of mind. This makes managing a large project quite difficult, as it means work is either not delivered or delivered at a subpar standard. The only course of action we've been able to take is to press on with our work as best we can and not rely too heavily on input from all members of the group. The way we structured the project initially means there are two main focuses for our group, and at present we are able to press on with at least one of these working properly. While this means we may not end up presenting everything we set out to as a final project, we will be able to have a large section of it finalised, be it proof of concept or finished completely.
As group leader, seeing early on that these issues were arising, I had to add to my milestones the goal of keeping the group running as smoothly as possible. I feel that, to a certain degree, I've been successful with this so far. A lot of my time in this project has gone towards ensuring that everyone knows what they're doing and when their work needs to be completed. But no matter how much I stress these issues, whether the work actually gets done remains out of my control, which can be frustrating, especially when that work then falls onto others to complete.


Mechanical Design:
From the outset of our project, we split the group into two parts: the mechanical and programming side of things, and the Crysis side. Matt and I took the physical aspects of the project and have been working on these since the beginning. Together we discussed ideas for how our system would work. Matt designed the initial Kinect rig, drawing up the plans which we eventually had laser cut into a prototype. Through this stage, I helped a little by talking through the design choices, although most of the work on this front was Matt's.

My work on the mechanical side has been the conceptualisation of our system for moving the Kinect across the room. Our initial brainstorming came up with the idea of using a Spidercam-like system to move the rig across the roof of the room.
This type of system runs wires from each corner of the room to the device. Motors at the corners extend or retract the wires in unison, allowing the device to move freely. I've investigated these types of devices to see how they run, as well as how to integrate one into our proposed system for the Kinect. We are still trying to narrow down how this would work if we were to add it to our prototype, so at the moment its use is purely theoretical.

Programming:
My main aim for this project was to immerse myself in the programming side of it, as so far I have little to no experience with that sort of thing and would like to know more. The project focuses on programming on two fronts: the Kinect's integration with Crysis, and the Arduino that runs the motor systems for the Kinect.
For the Kinect, we've managed to get the integration with CryEngine 3 working properly with Stephen Davey's help. At the moment what we can do with it is fairly basic, and we will probably need to get into the code of the Kinect and Crysis SDKs to achieve what we want. The next step for me will be looking into this code and seeing if we can get positional data for each limb of the person being tracked. This will let us interact with the bathroom appliances through gestures and body positioning. It will also enable things like changing the height of a toilet seat or shower head depending on the size of the person in the room, two of the major interactive parts we were hoping to achieve.

The other part of the programming I've been looking at is the Arduino. Coming from basically no programming knowledge, fiddling with the Arduino kit has been difficult. I've gone through a learning booklet that Russell provided when he gave out the kits, which has taught me a few things, but at the moment I can't just sit down and make it do something; I need instructions. My accomplishment so far is that I actually understand what all the code is saying, and can figure out some problems when I come to them. I'm very happy with this progress, as I was completely stumped by it all initially. So far I've been able to do everything we will need for our rig to work: I've got a stepper motor to move depending on a varying input amount, which is what would drive our Spidercam-style rig, as well as rotate the Kinect on its rig.
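For the stepper, the pattern follows the standard Stepper library examples: read the varying input, then move by the change since the last reading. A sketch (motor spec and wiring are assumptions):

    #include <Stepper.h>

    const int stepsPerRevolution = 200;                  // assumed motor spec
    Stepper stepper(stepsPerRevolution, 8, 9, 10, 11);   // assumed wiring
    int previous = 0;

    void setup() {
        stepper.setSpeed(30);   // RPM
    }

    void loop() {
        int value = analogRead(A0);        // the varying input amount
        stepper.step(value - previous);    // move by the change since last read
        previous = value;
    }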



So far in this group project, I feel I've made progress towards what I set out to do at the beginning. I've been able to further my knowledge of programming, as well as how to practically apply it to our project. I've also been able to help design the mechanical systems that will eventually run it all.
As team leader, I feel our team is slowly moving forward, although a lack of interaction has dramatically limited our progress. If we keep moving at this rate, we will be able to accomplish something substantial, although not at all what we initially set out to do.



Thursday 2 May 2013

Week 8

We collected our laser-cut prototype on Friday of last week and have now assembled it.



We were very happy with how it turned out, although I should stress this is only a prototype. There are a few issues with the current version that need to be fixed. The main one is that we measured the Kinect wrong, so the brackets that hold it in place are too large and it slips out. This would be easily fixed by measuring it correctly and laser cutting just those pieces again. The rotation mechanism works surprisingly smoothly. The whole unit ended up costing roughly $80, which is far too much at the moment and could be cut down if we were to do it again.
We looked at alternative mechanisms last week and settled on one specific unit that Russell ordered for us from the UK. It can be found here:
http://littlebirdelectronics.com/products/df15mg-tilt-pan-kit-15kg



It's a pan-and-tilt mechanism that offers two axes of movement. So far it is the only commercially available unit we've found that looks suitable, so I'm looking forward to getting it and seeing what we can do with it.


This week we also gave our group's presentation on Intellectual Property. I feel it went well. Our main strength was that we talked about the topic in the context of our project, which the other group that presented this week failed to do. The text for our entire presentation can be found on our wiki HERE.

Saturday 27 April 2013

Communication Presentation Review - Shades of Black

Overall this group's presentation was good, although there were a couple of areas where it fell down. It was obvious that the group was very well organised and that everyone knew their role within the presentation. The slides that accompanied the speaking were also well put together: they did not carry too much text, just enough that the audience knew what was being talked about. I did feel the pictures accompanying the talk were too generic; they had some link to the topic but nothing to do with the group's project, so they didn't really lend anything to the presentation. What everyone said seemed well thought out, although there were a few times where the text felt a little too verbose. This didn't help the presentation, but rather made it seem like they were spilling out large words for the sake of it rather than to explain the topic of communication.

One main concern of mine was that the group tried to tackle the entire idea of communication rather than to pick a small section of it and do it thoroughly. While they did split up the topic into sub topics, these were all summarised quickly by an individual. I think it would have been more substantial to pick one or two of these sub topics and focus the entire presentation on those.

The video at the end of the presentation was interesting, although it did seem to just reiterate the presentation. I did like the interview style the group used, although it seemed too disjointed by each person being in a large variety of seated positions.

Wednesday 24 April 2013

Week 7

The main update this week is that we sent our prototype Kinect rig to the laser cutters here at UNSW.

Design sent to the laser cutter


Sketchup model of the prototype
The prototype will be mainly cut out of MDF, with one acrylic part for the movement. We still need to buy some nuts and bolts to put it all together, but that will be easy.

This week we also began testing the Kinect integration on my laptop. It all ran smoothly after we spent a while setting it up. It is a little limited in what we can make the game character do from the real-world sensor, so we might have to rethink both our level of interaction and our method of demonstration.

Next week we are giving a group presentation on Intellectual Property, so we spent some time this week preparing together. Our tasks for the talk are split into the following categories:

Using IP:


  • How to use other people's IP.
  • IP our group is using and how we should use it.

Creating IP:

  • Automatic IP Rights.
  • Formally Registered IP Rights.
  • IP we've created and how we'd protect it.
I'm doing the second topic: IP our group is using and how we should use it. This means I need to cover all the different types of IP our project involves, and investigate how their licensing agreements and intellectual property protections allow us to use them in the context of our project. The main IP I need to look at is: the Kinect and its SDK, CryEngine 3, Arduino, and Caroma products.


Tuesday 16 April 2013

Week 6

I was unfortunately sick this week and didn't make class, but I talked to the group about where we're all at and what needs to be done. Matt talked with Stephen Davey about the Kinect integration with CryEngine. Unfortunately Matt's laptop runs Windows 8, which CryEngine 3 is incompatible with, so he'll either have to use my laptop or a uni one to do any work on the Crysis side of things. On the positive side, Stephen already has some code that works with the Kinect in Crysis, allowing character movement by moving your legs and leaning, as well as recognising hand gestures.

The level design side of the group also presented some of their work today. They have a first draft of the level, with three different-sized houses and the detailed bathrooms. The models all look quite nice, although they could do with some more advanced texturing.


Thursday 11 April 2013

Week 5

This week was a non-teaching week for UNSW, so we didn't have class. Matt and I talked about our ideas for moving the Kinect and are pretty certain we're going to go with the spider-cam system, but we'll get some feedback from Russell and Stephen next week and go from there.
Matt also worked on developing a design for a prototype Kinect rig. Info for that can be found on our wiki Here.

The other three members of our team have worked on the bathroom designs for the Crysis level, and should have them implemented into the environment by class next week.

Tuesday 26 March 2013

Week 4

In this week's studio class, we allocated who will be doing which group presentation later in the semester. I chose the topic of Intellectual Property for our group. This decision came about because it seemed the most relevant topic to our project: we are using a number of different companies' property to create our own system, so I thought it would be worthwhile to investigate exactly how we are and aren't allowed to use it. Our presentation will most likely also cover how we are creating our own IP and how we as a group can protect it. The topic is also very relevant to us as students and as future designers, so I think it will be very useful to look into.

As a group, we talked with Stephen and Russell about our project. We looked at what we were planning to achieve as a group and how we should go about it. Matt and I both feel that our main problem is going to be the Kinect and how to get it to function within a system running a number of different things at once. Stephen has some code that works with CryEngine already, but that doesn't necessarily mean it will do what we want.

Matt and I also talked about how we think we can move the Kinect around the room to track a person. This is because of the Kinect's limited view range and the possibility that the user will step out of this space. We had a number of ideas, the most likely being a hinge system: the Kinect sitting on a two-axis hinge in a corner of the room, allowing it to rotate sideways and up and down to track someone. The other idea, for a bigger room, is a spider-cam-like system that attaches four wires to the Kinect and runs them to winches at each corner of the room. These would allow the Kinect to glide across the roof of the room as the winches pay out or pull in the wires in unison. We will continue to look into these systems and try to figure out which will be the most useful and easiest to construct.

Tuesday 19 March 2013

Week 3 - Back Brief

This week our team presented its back brief. This is essentially where we take the brief for the project, look it over and write down our interpretation of it and what we propose to deliver as an end result. I feel our back brief was well received.

The brief was posted on our team's wiki, which we established last week as a tool for documenting our progress through the project and providing a reference for each member. The wiki can be found here:
http://geriambience.wikia.com/wiki/Geriambience_Wiki

Back Brief Outline

  • Group Name: Geriambience
  • Group Members:
    • Steven Best (leader)
    • Dan Zhang (Dan)
    • Jing Liu (Laura)
    • Mathew Kruik
    • Siyan Li (Allen)
  • Group Project: Livable Bathrooms

Project Conditions

  • Project Concept:
This project aims to create a series of smart and convenient bathroom appliances and fixtures, designed for ease of use by elderly people. A Kinect for Windows will be situated in a corner of a bathroom to track residents' movement and body reactions, allowing it to adjust bathroom devices intelligently. The height and condition of furniture and equipment can be controlled by residents' gestures to help with everyday bathroom activities that could be hindered by a lack of mobility.
  • Project Proposal:
In the project, a series of bathrooms will be designed and imported into CryEngine 3, a video game engine. This will be linked to a Microsoft Kinect for Windows, a motion tracking device, which allows a person to move around in a real-world space and control the in-game character. This demonstrates how the Kinect could allow the control of real-world appliances and fixtures through gestures.
  • Specific Intelligent Equipment:
    • Door:
      • Open - When residents stand in front of and face the door, the door will automatically open.
      • Close/lock - When residents enter the room, the door will automatically close and can lock with the use of a specific gesture.
    • Light:
      • Main light – open/close: When residents enter or leave the bathroom, the main lights will turn on or off accordingly
      • Mirror light – open/close: When residents enter or leave the area occupied by the mirror, lights around the mirror turn on and off accordingly. Gesture control allows for the dimming and brightening of these lights.
      • Shower light – open/close: When residents enter or leave the shower area, the shower light will automatically turn on and off accordingly.
    • Sink:
      •  Turn on – When users put their hands under the tap, the tap will turn on.
      •  Turn off – When users move their hands away, the tap will turn off.
      •  Up – When users lift up their palms, the water pressure increases.
      •  Down – When users move their hands down, the water pressure decreases.
    • Shower:
      • Tracking – When a resident turns the shower on, the Kinect knows where their body is in relation to the shower head, and will move the shower head to be in the right position for the body, both in height and sideways movement.
      • Temperature:
        • Up – When users lift up their palms in the shower, the water temperature increases.
        • Down – When users move their hands down in the shower, the water temperature decreases.
    • Toilet:
      • Positioning – When a resident enters the bathroom, the Kinect reads their body height and adjusts the height of the toilet to best suit their needs for sitting down.
      • Flushing – When residents stand up from the toilet, it will flush automatically.
    • Ventilation:
      • Turn on/off - When residents enter the space, the exhaust fan will turn on intelligently and when they are out of the space, the exhaust fan will turn off automatically.
    • Safety
      • Collapse Detection - The Kinect can detect when a user falls down and can alert relevant persons to assist.

Project Planning

  • Bathroom Design
    • Designers: Dan Zhang (Dan), Jing Liu (Laura), Siyan Li (Allen)
    • Design Proposal: Design a series of three bathrooms, containing all the devices needed to simulate a real-world space. After modelling, the 3D spaces will be imported into CryEngine 3, where they will be hooked up to the Kinect. These bathrooms will encompass all the fixtures and devices used by the elderly, including those that assist with disabilities.
    • Design Scales: 3x2m, 3x4m, 4x6m.
    • Design Concepts:
  • Programming
    • Programmers: Steven Best, Matthew Kruik.
    • Programming Concept:
      • Connecting the Microsoft Kinect to a computer, the data is read by the node system in CryEngine 3, allowing full control of the character, and interaction with the virtual space.
      • Additionally, an Arduino kit will be used to allow the Kinect system to track the user, negating its small range and limited movement.
  • Test Proposal:
    • Outlining an area in a room that equates to our virtual space, the sensor is located in the same position as in the game engine. A series of boxes is set up throughout the space, mimicking fixtures such as toilets and sinks. A user can walk through the space and see onscreen how the virtual appliances and fixtures react to them.
    • To take this one step further, we will look into virtual reality devices such as the Oculus Rift, which would allow a much more immersive experience.