UI Choice Design and Application

Tips and Tricks from Steve Krug’s Don’t Make Me Think

I aspire to become a UI/UX Designer, among other things. This recent project of making a mobile app for New Intelligence, where the entire game is essentially UI, seemed like a great opportunity to learn important theory and application for good UI, right?

Steve Krug – Don’t Make Me Think

Unfortunately I didn’t get to finish the entire book (as of yet), but there are still very important insights that I’ve taken away. Steve says at the start of the book that a lot of what he talks about isn’t going to blow your mind; in fact, you probably already know a lot of it. I find that he phrases and presents this already-familiar knowledge in very meaningful ways that really send it home and make it blatantly obvious. It’s like taking everyday things that we see and putting them under a microscope to see them in a different light. And while this book was originally about the web, it’s been adapted for mobile as well, and its principles still transfer to general UI design.

A few notes.

The number one rule is also the title of the book: “Don’t make me think”. Making a page self-evident is like a store with good lighting, or in this case, good level design with good lighting, guiding users. A prime example of this came recently when I started to play Black Desert Online. There’s no tutorial on the UI, and with the amount it shoves in your face, it needs one. You get dropped in the deep end and start drowning. Well, that was my experience anyway. And this is coming from a veteran of World of Warcraft. The UI is same-same but different: same conventions, but BDO has a little bit more complexity and depth.

I don’t expect you to watch this video; just mute the audio and have a flick through to get a feel for the amount it shoves in your face. And by no means does this video cover it all. My point is, though, that Black Desert Online, even while using general conventions, had so much complexity and wasn’t easy to decipher that it was constantly making me think. To the point where I was actually getting frustrated. To the point where I stopped playing. Call me crazy, shake your head and tell me that doesn’t happen, but it does. I’ve installed countless apps where I’ve done the same because of bad UI design. It often (though not always) leads to a bad user experience. And as Steve says, “If there’s good navigation, users have a good impression and start to trust”.

  • Don’t re-invent the wheel! Use Conventions – It’s a risk vs reward type scenario.
  • We parse visual hierarchies so fast. We only recognise that we do it when we can’t do it.
  • Work most magic at a glance, most people spend less time looking at it than we think.
  • People rely on the cursor change to tell what’s clickable.
  • Highlight where we are in navigation – in tabs too.
  • Avoid wall of words.
  • Design for scanning – not reading.
  • Take advantage of conventions.
  • Create effective Visual Hierarchies.
  • Break up pages into clearly defined areas.
  • Make it obvious what’s clickable.
  • Eliminate distractions.
  • Format content to support scanning.
  • Every page needs a name.
  • More importance = more prominence.
  • Prominence, grouping, nesting.
  • Consistency – BUT – CLARITY trumps consistency.


Krug also breaks usability down into these qualities:

  • Useful – Does it do something people need done?
  • Learnable – Can people figure it out and know how to use it?
  • Memorable – Do they have to relearn it each time they use it?
  • Effective – Does it get the job done?
  • Efficient – Does it do it with a reasonable amount of time and effort?
  • Desirable – Do people want it?
  • Delightful – Is using it enjoyable or even fun?

Trunk Test

A simple test to perform on websites, and possibly within games or apps.

  • What site is this?
  • What page am I on?
  • What are the major sections of this site?
  • What are my local navigation options?
  • Where am I on this page?
  • How can I search?

New Intelligence 1

The main screen comprises a minimal number of things, including the title of the game/company name. The game doesn’t actually have a name; it’s just called the ‘New Intelligence App’. Because, like Krug says, every page needs a name, and the more importance, the more prominence. Although it is the New Intelligence logo, it’s a branding thing: this is the app for their company, plus it tells you what app you’re in. Originally there was a screen before this that had the NI logo and a button that said ‘start’, which would bring you to this screen. It was removed because when a user opens the app, we don’t need them to ‘start’ the game; they just opened the app, so they’ve already started. Having the start panel/page there was a needless, time-wasting interaction and it didn’t add anything to the game.

It also has nested among it a picture of a ‘head shot icon / profile’, rather than the word ‘profile’, which can be conventionally identified as having the functionality of a profile containing user-specific details. When it comes to scanning, not only does the icon take up much less valuable screen space, it can be condensed into a very precise location that works within the limitations of the banner space and is more quickly and easily identified. Scaling text large enough to be legible within that small amount of space would clutter the banner and detract from the prominence of other elements. Scaling the text small enough that it doesn’t clutter the rest of the banner, and still works with the prominence and grouping conventions, makes the word illegible.

The last element on the screen is the rounded PROSPECT model that unfolds each time a user enters this panel. Each of the PROSPECT model letters is a button that navigates into the appropriate panel. “Transformation is the most discernible, largely because it stands out. A ‘submit’ button changing shape to become a radial progress bar and finally changing shape again to become a confirmation check mark is something that we notice. It grabs our attention, tells a story, and has completion” (Willenskomer, n.d.). The intent behind having the PROSPECT model unfold is to grab attention and make users want to interact with it.

How do we read? From left to right. How do analog clocks display the time? The PROSPECT model unfolds from left to right, so, as conventions have already taught (not to mention the way the letters unfold), users should already understand how to read this transformation. If they’re using the app, they’ve also already undergone training with New Intelligence and understand what the PROSPECT model is. Although they are only circles with letters in them, nothing up until this point has led users to understand that these letters could be pressed. With only the game title and a profile picture to draw attention, at this point everything could look like it can be pressed, or everything could look like it can’t.

The only difference between the letters of the PROSPECT model and the profile icon (which are the only interactable elements on this panel) is that the profile icon doesn’t have a backing image to keep it consistent with the conventions of the other interactable elements in the game. The reasoning behind that is, as I mention in my post-mortem, that I had to step away from this project for a time and someone else took the reins of lead UI designer (Josh). I asked Josh, and his reasoning was that he was doing fewer outlines in general. He updated all of the conventional banner buttons (home, back, exit activity, profile) to be just the necessary icon. The touch zones are larger than the icons to allow a larger space for touch functionality. He said it just seemed like unnecessary contrast when we already have the white bar, and it allowed the icons on the buttons to be bigger.

New Intelligence 2

When inside any part of the PROSPECT model, the colour of the background is the same as the letter’s. Although it’s a straight panel swap, because time didn’t allow for super juice, I think this is a fantastic and easy way to identify where the user is within the app based on their recent selection. It eliminates the need for a “YOU ARE HERE” title, and the colour contrast works really well. If we had more time, it would have been really nice to transition between those panels by watching the PROSPECT model button’s colour fill the screen while the elements transition in and out where needed.

But much like the last screen, not much has changed. There are buttons to choose from to take the user into an exercise, and the profile button is still in the top right. In fact, that profile button’s location never changes; it is always there. Consistency and clarity. Wherever the user is in the app, in the top left of the screen (or the left side of the banner) is a button that takes them to the previous panel, like the back button of an internet browser (the most used button). Except within the app the icon changes according to where the user is. If in an activity like the photo above, it’s a home icon to represent going back to the home page. If the user is in an exercise like the next photo, it’s an X icon to represent exiting that exercise. The functionality remains the same, but the visual distinction is different because it has different consequences within each panel. And the icons are easily recognisable, to reduce the amount users have to think.

New Intelligence 3

This is one of the different types of exercises within one area of the PROSPECT model. In each activity the banner has exercise text to explain what the aim of the activity is. Underneath that is how users are to interact with what’s presented in order to complete the exercise. In this exercise there’s a blue line to separate where the answering zone is from where the options are to make the answer. The answer zone is always on top and the answer options always on the bottom, and this is a convention carried through the app, with the exception of some special-case exercises that can’t use this convention. The answer slots are greyed out in the same shape as the answers.

Unlike Duolingo, as I mention [here], we’ve made sure that the first word of each sentence isn’t capitalized, which would otherwise reveal which answer it is.

And without describing every single process we’ve gone through to construct the UI in this New Intelligence app, I think that will suffice as a rundown of some of the techniques I’ve learnt and their application in this app. These are very strong foundations and I plan to keep using them in future projects.

Thank you for reading, and I hope that some of the things I’ve discussed or learnt to help me in my journey also help you.

Until next time –



Willenskomer, I. (n.d.). Creating Usability with Motion: The UX in Motion Manifesto. Medium.


Posted on May 1, 2017 in Uncategorized


New Intelligence: Project Methodology

Critical Chain / Agile hybrid

Critical chain is sometimes also known as critical path. We combined the critical chain methodology with the agile methodology.

What is Agile?

Agile management, or agile process management, or simply agile, refers to an iterative, incremental method of managing the design and build activities of engineering, information technology and other business areas that aims to provide new product or service development in a highly flexible and interactive manner; an example is its application in Scrum, an original form of agile software development.[1] It requires capable individuals from the relevant business, openness to consistent customer input, and management openness to non-hierarchical forms of leadership.[1] The Agile Manifesto is centered on four values:

  1. Communication with parties is more important than standard procedures and tools.
  2. Focus on delivering a working application and less focus on providing thorough documentation.
  3. Collaborate more with clients.
  4. Be open to changes instead of freezing the scope of the work.[2]


What is Critical Chain?

As opposed to waterfall and agile project management, which focus more on schedules and tasks, the critical chain project management methodology is geared more towards solving resource problems. Each project has a certain set of core elements, called a critical chain (sometimes referred to as a critical path), that establishes a project’s minimum timeline. The critical chain methodology devotes adequate resources to this critical chain while devoting enough resources to other tasks that they can run concurrently, but still keeps enough of a buffer to reassign resources when needed. This setup is ideal for resource-heavy teams, or for those who have enough flexibility in their team members’ respective skill sets.

Going in to create a serious game for NI, we had the mindset of using an agile project methodology, purely because this was the first time that the team and I had “worked” for someone else. Although NI were our ‘client’ and seeking our expertise, they are the ones who know the content inside and out, and what the app’s intention is. Ultimately we are following their lead, and they ours, leaning on each other’s expertise. It was never going to be a straightforward project, for many reasons that I’ll get to in the post-mortem. We knew there was a start and end date, but the in-between was bound to change. We were never going to know the exact timeline, so it needed to be flexible. We’d also have to account for changes arising from collaboration or meetings between New Intelligence and the team. We ended up adapting and morphing an agile methodology with the critical chain methodology. We needed critical chain because we did have core elements that needed attention, but the timing of them kept changing. Also, while agile puts more of a focus on delivering a working application than on documentation, this project still relied VERY heavily on documentation.

  • A – So we knew what we were doing.
  • B – So we didn’t forget what we were doing.
  • C – We needed to figure out the systems and what exactly is going into this app.
  • D – We needed others to understand what we were doing.
  • E – What if we were going to continue working on this after the delivery? What if someone else is?

The critical chain methodology helps us identify the most urgent task and work towards it. It also helps us identify deadlines that we need to work towards and set focus on. We know that there are milestones and that those milestones might change: the milestones’ content might change, or the time of achieving a milestone might be pushed forward or back. Critical chain also helps us adequately assign our valuable resources to work towards specific outcomes whilst still assigning resources to other tasks that can progress side by side without depending on each other to move forward.

The projected timeline

Initial Project Timeline

To wrap up my points above, this has been the first project where I’ve actively stepped down from the project management role. However, I’ve happily shared all the tips and tricks that I’ve learnt along the way, to help better enrich the knowledge of our project manager in this instance. Ultimately it’s his say on how the project will be run, and what our approach to tasks and deadlines will be. My personal mentality is that, he says we’re doing it this way, and I say “okay”.

Until next time –




5 Effective Project Management Methodologies and When to Use Them. (2017).

Agile management. (2017).


Posted on March 31, 2017 in Game Dev


New Intelligence: Tools & Paper Prototyping


The gang of designers

Over the last few weeks the design team and I have been exploring different areas and tools to help us in the process of creating the serious game for New Intelligence. After we’d done the training course and analyzed a few different types of games and how they present information, we came together and found the commonalities between what NI thought needed the most attention and what we thought needed attention. One of the tools that has actually been extremely handy to have is a giant whiteboard (which is the size of a wall) to quickly dump a large amount and variety of information.

After we identified the commonalities, we all ended up having a half-hour rapid brainstorm session. The brainstorm session was to jot down as many different ideas as we could that summed up all of the content we had learnt and what needed to be in the app, but in in-game form. This was the time to start transferring all of this knowledge into activities that can be played/used by players in the app. We wrote down our ideas on post-it notes. Post-it notes worked really well for a few reasons.

  • They work well in a collaborative work space. You can all put them in a specific area within a room, like a pin board, for everyone to see while brainstorming and afterwards.
  • They limit the amount of information you can put on one, so it ends up being a more condensed version and somewhat easily decipherable by others.
  • Because they limit the amount of information it means there’s less time spent sitting on a single idea, it almost forces you to push onto another.
  • It makes you constantly move to put the post-it note onto the ‘pin board’. Makes you stimulate body and mind. I find that this gets all kinds of creative juices flowing and improves workflow.

The result:


Then after that, we each wrote on the whiteboard which ideas we liked best out of all of them, as well as another consolidation of the most frequent ideas that came up.


After we had the commonalities, we assigned particular tasks and split up to start drafting particular exercises on post-it notes and how they’d work within the app, to test whether any of these ideas could possibly work and what they felt like to operate. And much like what we’ve done in other stages of game development: paper prototype!


There were large sheets of paper to test on, and post-it notes, along with a whole box of goodies that can be used for paper prototyping. In Adam’s case, he was using some of the large sheets of paper with lots of goodies from the box, having physical objects that can be moved around, simulating what the app’s screen might look like. The rest of us stuck to post-it notes, because these post-it notes are almost the same size as 2016–2017 modern-day mobile screens. If we could fit our prototype content onto similarly sized post-it notes, we knew it could translate onto phones in a similar fashion. And if it fits onto these sizes, it’s only going to be easier to interpret and decipher on larger screens such as tablets.


Some of my paper phone prototypes

Eventually it came to a point in time where we should start seeing where these elements fit into the app and how the app flowed. This was/is another ongoing conversation, because this is iterative design. It was back to the drawing board – literally. We started to figure out how and where each of the game elements worked, using the big whiteboard ‘wall’ as a plotting space to compile all of the common aspects and activities we wanted to be in the app.


Once there was enough knowledge of how the app could flow, and of where some of the prototype content could fit in, it was about time to move on to testing how this would actually look/feel/operate on a digital device. Luckily, being in the 21st century with technology at our fingertips, instead of blasting straight into Unity, which is what we’re pretty much used to doing, we took another approach. The internet has plenty of places to do “proper” or “mockup” UI designs. And one of the greatest things about using one of these sites is that if we had made this quick prototype in Unity, it might have felt like the beginning of design. Sure, it could be iterated on, but we might have been afraid to discard it. Being on an external site means we have to discard it. Rather than sticking with the first thing we try, we explore different ways of doing things. Here are a few sites that we tested that enabled us to do what we wanted for free:

Now, without going into great detail about what’s so good and bad about each of them, I’m going to do a quick summary. They all do what they’re supposed to. Some have minor differences in aesthetics, in what they enable you to do, or in what is available to use. Some were Mac only. The one that stood out was Marvelapp, for a number of reasons.

  1. Multiple people can be invited to the project and work simultaneously.
  2. There’s an app that you can download and view your project on.
  3. Within the app you can play through your design.
  4. The app is free also.
  5. You can send links to people to play through your design!
  6. You can record exactly what users do!
  7. It updates live (when there aren’t any syncing problems).
  8. It updates live on mobile devices (when there aren’t syncing problems).
  9. It can be used on both PC and Mac.

Here’s a list of cool features that it can do. It lacks a few features that make the difference, like simple copy-and-paste or drag-and-drop functionality in play mode. Parts of the simple functionality were missing, which makes it a little more time-consuming to make VERY simple screen changes that get the idea across, but the app itself makes accessibility a breeze. When it came to sending the link to NI, getting them to run through the app in its most basic form, and getting the message across so they understood it, that confirmed it was the right choice.


Until next time –



Posted on March 23, 2017 in Uncategorized


The Choices Within: My Friend Game Design Ethics

In recent weeks I’ve had discussions with some of my colleagues about ethics in game design. We each jumped on and ran through a couple of scenarios. These specific instances aren’t tied to game development at all; they present particular scenarios and options to resolve them. At first I was expecting them to resolve in particular ways, giving me a massive novel of ‘how I did’ and the implications behind the choices I made. But it didn’t. Throughout, it gave me feedback on the decisions I made and told me whether some of my choices conflicted with earlier ones, whether I was consistent in my choices. Essentially, it tests for what you believe is right or wrong, or just what you believe in.

Dean Takahashi said:

Each person’s definition of what is ethical changes.

Everyone grows up in different environments, with different people, different conditions, different life experiences, different teachers, different perspectives and different religious views, which all culminate, ultimately, in different beliefs. So to me, ethics isn’t about being right or wrong; it’s about what we believe is right or wrong. It’s where we draw the line. To me, ethics only matter when a living form is impacted, whether it’s you as the individual, or someone else, or the resulting action impacts a living form: humans or animals or both.

The IGDA (International Game Developers Association) has a Code of Ethics. If you haven’t read it yet, please do; it’s really straightforward and covers the basics of a lot of topics. I’m not part of the IGDA, but their 3 sections, more so sections 1 and 2, are immediately applicable and are processes that I already follow, and have followed, because they tie very closely to my beliefs.

So in saying all of this, in my game development journey/experience so far I have never been asked to do or create anything that has pushed me to that line, that threshold of what I think is right or wrong. Nor have I asked anyone to do something for me that pushes them to their line. I’m at the point in my career where I’ve fumbled around in the dark enough to grasp the basics of game development and the tools required to make ‘games’.

I’ve found my footing.

Now is where the games I create are truly starting to shine as a cohesive whole. Rather than ‘here’s some mechanics I tried to make‘, it’s ‘here is an actual game‘ or ‘here is an actual game that provokes a particular experience‘. Most of which don’t (not my intent) or shouldn’t conflict with anyone else’s thresholds. For example: a game called ‘TeTron‘, a hovercraft in a futuristic Tron’ish-looking space where you collect tetris blocks and deliver them to a black hole. Nothing controversial, right? But if in any case you as a reader have ended up playing TeTron and found anything misrepresented, I’d be more than happy to talk about it. It’s a dull example, but it leads me to my next point.

Feeding The Forgotten

“With inspiration drawn from my recent travels to PAX Aus in Melbourne (2016) and other travels within and around my hometown of Brisbane, I wanted to put the player in the shoes of a person who treats everyone as if they were equal. With the world what it is today, we all have the power within us to help those who are in need, or less fortunate. And in the process hopefully inspire others to do so too. With fictional characters who have real world issues, I wanted to portray these characters for who they are, as people.”

Feeding the Forgotten is one of the only games so far that has required me to properly consider ethics in game design. This was a game where I was constantly jumping back and forth over my own threshold of what I thought was appropriate, especially because this is a product for consumers. My opinions that are presented in this game will ultimately be viewed and digested by those consumers. But not only that: the representation of everything within the game, too. The representation of elements in the game is the only bridge of communication for what I was trying to get across, and that representation needed to be a good medium for my intention but also a fair and accurate representation.

In the IGDA code of ethics – Section 1 point number 7.
Strive to create content appropriate for our stated audience, and never misrepresent or hide content from committees assigned to review content for communication to the public, and specifically we will work strenuously to cooperate with and support local/regional ratings boards.

Never misrepresent content.

That’s exactly what I wanted to do (not misrepresent content). I’m representing artificial human constructions that mimic real interpretations of the concept of people who don’t have a house to call their own, as well as constructing stories based on real-life issues that contribute to putting these people in the positions they’re in. To me, this is a very delicate subject, because it is a representation of some people’s lives. People actually have to go through this. So in saying that: if these people and stories were misrepresented, not only would the original intent of this game be completely out the window, it could offend any of the audience who have as much or more knowledge than I do on the subject matter. And as a game developer, not only do I feel that it is my duty to represent content as accurately as possible.

I Want To.

In order to do so I needed to research. And research was done, but the rest of this isn’t going to be about the research. One of the main things that I wanted to get across was that these people aren’t in the situations they’re in because of drugs, because there are many more reasons. Unfortunately I could only get across the 7 that I have within the game, because of a little thing called ‘scope’, and deadlines. I just wanted to address some of the possible reasons as to why this occurs, and bring to attention that they are still people.

Magoo was one of the people I ended up running into regularly. Funnily enough, it was at the bridge.

New Intelligence

As I’ve mentioned in a previous blog, New Intelligence have asked us to make an app for them. Part of the process has been that they’ve provided us with the content and training required in order to understand what they do and what their content is. They ran us through the training course that they provide to their client base, the training course that they actually charge money to participate in. They provided this training course to us free of charge.

Part of the agreement is that we are free to show our work but not give away the content for free. So in further blogs that might explain bits and details of parts of the app we’re making, it will never be content heavy. Even though this isn’t in a contract (but I’d be happy for it to be), as a developer I wish to honor this agreement. It ties directly into ethical design, in the respect that a company has devoted their time and their money into researching and creating a commercial product. Much like game development (at a later stage where I might be charging money for games, or working on games that will have a commercial purpose), I’d like to respect section 3 of the IGDA Code of Ethics and avoid giving away their course content for free.

Respect intellectual property rights.

Until next time –



Posted on March 20, 2017 in Uncategorized


Freedom Through A Lens UI – Part 2: Selecting Photographs

This is part 2 covering some of the UI that I developed in Freedom Through A Lens. It picks up once everyone has been spoken to, the photographs have been taken, and the photojournalist is back at their office choosing which photos to use in their piece of media. I say ‘a piece of media’ because its specifics are left open for interpretation.

End Menu 1

Everything here is UI. But like I discussed in part 1, this UI canvas is a child of another, separate camera, and its render mode isn’t Screen Space: Overlay; it’s using Screen Space: Camera (this camera). The intent behind this section is that once the photojournalist got back to the office, they’d have a single photo of each of the people who allowed them to take one. Because there would be a small number of photos, they would be scattered across the desk in a non-organised fashion, but scattered in a way that no photo was completely covered by another. Once the player hovered over an image, it would move towards the camera, enlarging it. If they selected the image, it would translate to an already-set spot. If they stopped hovering over the photo, it would translate back to its original position. A maximum of three photos could fit on the piece of media. Once three photos had been selected, there would be a confirmation asking if these were the photos they wanted to use. If they chose to re-select the photos, all of the photos would translate back to their starting positions. The remaining images that had not been selected moved off-screen. Originally, as I set this up, there were always going to be 7 photos, 3 of which got chosen and 4 of which got discarded. When there were more photos not being selected, it was more effective to watch.



All of this was done by using Mathf.Lerp on the RectTransforms of UI images. The new part that I had to learn was that UI image transforms work a little differently to regular Transforms.

Transform and RectTransform

Transforms always move from the pivot point of the object, not the centre, which is the opposite with RectTransforms, along with the syntax for how to move objects.

Transforms are moved by transform.position.

Rect Transforms are moved by rectTransform.anchoredPosition or anchoredPosition3D.

The anchored position is wherever the anchor is set.
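As a rough illustration (this is a minimal sketch, not the project's actual code; the class and field names here are my own), a Unity script along these lines lerps a UI image's anchored position toward a target over time, which is the kind of movement described above:

```csharp
using UnityEngine;

// Hypothetical sketch: moves a UI element's RectTransform from its starting
// anchored position to a target anchored position over moveDuration seconds.
public class PhotoMover : MonoBehaviour
{
    public Vector3 targetPosition;   // anchoredPosition3D to move to
    public float moveDuration = 0.5f;

    private RectTransform rect;
    private Vector3 startPosition;
    private float t;                 // 0..1 progress through the lerp

    void Awake()
    {
        rect = GetComponent<RectTransform>();
        startPosition = rect.anchoredPosition3D;
    }

    void Update()
    {
        t = Mathf.Clamp01(t + Time.deltaTime / moveDuration);
        // Note: anchoredPosition3D, not transform.position - the UI element
        // is positioned relative to its anchor, as discussed above.
        rect.anchoredPosition3D = Vector3.Lerp(startPosition, targetPosition, t);
    }
}
```

Because the positions are anchored, where this actually lands on screen depends on how the anchors are set, which is exactly the behaviour compared in the screenshots.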

UI Pivot

Anchored Position 1

Scene view – from anchor point to position

The four little white arrows are the pivot points. They’re moving in 3 dimensions from the anchor point (x, y, z values) to the position of the set object (x, y, z values). When the moving object’s anchor point is in the centre, and its x, y, z are zero, it moves the centre of the object to the x, y, z values of the set object. Whereas if the moving object’s anchor point is not in the centre, for example the top left, and those values are zero, the zero values still lerp to the set object’s x, y, z values, but in screen space the anchor point moves to a completely different location.

Anchor Points Comparison to Move to Values with red 1

Centre Pivot

Anchor Points Comparison to Move to Values with red 2

Top Left Pivot

Anchor Point 1

Game View

Anchor Point 2

Scene View

Anchor Point Position Lerp.gif

Side on view to see movement on the Z axis.

So all of this is done where every photo has a manually placed object to move to. That move-to object is a child of the same object the photo is a child of.


So this is all done under the same parent object, essentially. I tried to make the photo lerp to the position of an object that isn't an immediate child of the same parent, and it doesn't work well. My educated guess: when an object that was a child of the same parent becomes a child of just the canvas, its x, y, z values get changed. Much like having different anchored positions, because the x, y, z values lerp from their current state to the new ones, the on-screen positions end up very different. So unfortunately every photo has its own manually placed move-to objects, because each one needs more than a single position to move to.
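One possible way around that re-parenting jump (an assumption on my part, not what the project ended up using) is SetParent with worldPositionStays set to true, which recalculates the local and anchored values under the new parent so the element stays put on screen:

```csharp
using UnityEngine;

// Sketch: re-parenting a UI element without its on-screen position jumping.
// worldPositionStays = true keeps the element where it appears on screen,
// recalculating its local/anchored values under the new parent.
public static class UIReparent
{
    public static void Reparent(RectTransform photo, Transform newParent)
    {
        photo.SetParent(newParent, worldPositionStays: true);
    }
}
```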

Hovering & Selecting

In order to hover over a photograph and initiate the lerp to a position, I wanted to use Unity's EventTrigger, which can be set up entirely in the inspector. Specifically PointerEnter and PointerExit.


When the cursor has entered the button: move towards the desired object.
When the cursor has exited the button: move away from the desired object.

The cursor entering and exiting the button toggles the moveTo bool. The bool is part of a script attached to the button. The enter and exit events access that script and run one of two methods: one turns the bool true, the other turns it false. The lerp operates from 0 to 1 (0% to 100%) and the timer is just the percentage. If true, up the percentage; if not, down the percentage.
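A minimal sketch of that hover lerp (field and method names are illustrative): the EventTrigger flips moveTo, and the "timer" is the 0-to-1 percentage fed to the lerp.

```csharp
using UnityEngine;

// Sketch of the hover lerp: enter/exit toggles moveTo, Update drives the percentage.
public class PhotoHover : MonoBehaviour
{
    public RectTransform target;     // manually placed move-to object
    public float speed = 4f;         // how fast the percentage climbs or falls

    bool moveTo;
    float percent;                   // 0 = start position, 1 = target position
    RectTransform rect;
    Vector3 startPos;

    void Start()
    {
        rect = GetComponent<RectTransform>();
        startPos = rect.anchoredPosition3D;
    }

    // Hooked up to the EventTrigger's PointerEnter/PointerExit in the inspector:
    public void OnPointerEnter() { moveTo = true; }
    public void OnPointerExit()  { moveTo = false; }

    void Update()
    {
        // If true, up the percentage; if not, down the percentage.
        percent = Mathf.Clamp01(percent + (moveTo ? 1f : -1f) * speed * Time.deltaTime);
        rect.anchoredPosition3D = Vector3.Lerp(startPos, target.anchoredPosition3D, percent);
    }
}
```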



But then this happens

See the jitter? That's because the mouse is entering and exiting the bounds of the photo in a very short time span. To stop it, I had to chase the image with the cursor, which is not what I wanted at all, and not what I wanted players to have to do either. So there needed to be some kind of buddy system: a zone that the cursor could enter to trigger the movement, and that would keep the bool true while the cursor was within that zone or still within the photo.


The solution was to have a button (with a non-transparent image) that can't be interacted with, in the exact same position and size as the photo. This is the parent object: the master object that detects the cursor enter and exit, and holds the script that toggles the bool.



The fake area is the object with the EventTrigger and the script whose methods grab the 'myButton' object's script and turn its moveTo bool true or false.
When I tested this out, hovering over the fake area triggered the move, and the photo still lerped to the correct position while the mouse was hovered over the photograph, even when parts of the photograph or the cursor weren't within the fake area.
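The relay described above can be sketched like this (class and method names here are illustrative, not the project's actual ones): the parent fake area owns the EventTrigger and just forwards enter/exit to the child photo's script.

```csharp
using UnityEngine;

// Sketch of the "fake area" relay: the parent detects pointer events and
// forwards them to the script on the child photo/button.
public class FakeAreaRelay : MonoBehaviour
{
    public PhotoMover myButton;   // the script on the child photo/button

    // Wired to the EventTrigger's PointerEnter entry in the inspector:
    public void CursorEntered() { myButton.SetMoveTo(true); }

    // Wired to the PointerExit entry:
    public void CursorExited()  { myButton.SetMoveTo(false); }
}

// Minimal stand-in for the photo's movement script.
public class PhotoMover : MonoBehaviour
{
    bool moveTo;
    public void SetMoveTo(bool value) { moveTo = value; }
}
```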

I had the desired effect, but I was curious as to why and how it worked. I fiddled around and discovered that because the photograph is still a child of the fake area object, its Image's 'Raycast Target' bool makes it part of the system that checks for the mouse hover. The Image on the photograph, with Raycast Target enabled, still passes info to the parent, reporting that the cursor is hovering over it. COOL HUH?!


UI Child Index Issue

There was only one last problem to solve, and thankfully it was an easy one. Unity 5.5's UI system renders children in hierarchy order: the child closest to the top of the canvas hierarchy gets rendered first, so later siblings draw on top of earlier ones.

UI rendering

The solution is to set the sibling index of the object being hovered over to a specific child index of its parent. In this case, if the number of objects changed over time, this would require some manual upkeep every now and then.
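A minimal sketch of that fix (assuming the hover callbacks described earlier; method names are illustrative): push the hovered photo to the last sibling index so it draws on top, then restore its index afterwards.

```csharp
using UnityEngine;

// Sketch: later siblings draw on top, so moving the hovered object to the
// last index renders it above its siblings.
public class BringToFront : MonoBehaviour
{
    int originalIndex;

    public void OnHoverStart()
    {
        originalIndex = transform.GetSiblingIndex();
        transform.SetAsLastSibling();              // render on top
    }

    public void OnHoverEnd()
    {
        transform.SetSiblingIndex(originalIndex);  // restore draw order
    }
}
```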


There’s more to talk about with this but I’ll conclude it at that for now.

Until next time –



Posted by on March 15, 2017 in Uncategorized


Freedom Through A Lens UI – Part 1: Main Menu Camera Flash Effect

Nic Staracek asked me to help him create a user interface (UI) for his #ResistJam game:

Freedom Through A Lens

You can download it here.

I'll call Nic "Staracek" for the purpose of these specific blogs, because my name is also Nic and I might get confused saying my own/his name every now and then.

What Staracek wanted was a camera flash (as if you were taking a photo) to accompany transitioning between menu pages (among other things). I'm no artist, and neither is he, so we can't (at the moment) easily make a super cool visual effect, image, shader, etc. that gets the effect he desired.


We fiddled with a few options that could possibly replicate or do similar to the camera flash.

Test Camera Flash

Quick Test Option 1

Black Test Flash

Quick Test Option 2

We quickly tried (EMPHASIS ON QUICKLY) having a UI image and just scaling it up fast. We thought that no matter how pretty the image was, it wouldn't feel right: it has such an abrupt edge. Even an image with a softer edge still didn't feel right. What about a solid white screen overlay whose transparency just faded in and out?

Test White Canvasgroup alpha

UI Image With CanvasGroup Alpha moving between none and full

It felt better but it was missing a critical element to how camera flashes operate. It didn’t have an origin point, it was kind of just everywhere. I thought about how light actually affects objects and the possibility of having a light in the UI.

Light In World Space.JPG

UI Light In World Space

But there was a problem. The light, even though a child of the UI Canvas, did not affect the other objects. It was positioned in coordinates relative to the UI Canvas and seemed to be in the correct place, but from what I can gather, lights in physical space and UI Canvas images interact very differently. So without spending too much time wondering how to make that work, I jumped to the next thought: light touches physical objects, so why not make the menu a physical object?

Create a 3D plane object (rotated to face the camera) and drop the image sprite onto it so it converts into a material. At this point the camera had a Perspective projection; it still cared about depth. I created a separate Camera, switched its projection to Orthographic, and untagged it as 'Main Camera' to avoid any possible confusion with cameras later. The menu camera no longer cared about depth (the Z axis) and only cared about objects positioned in front of it on the X and Y axes.
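The camera setup above was done in the editor, but it can be sketched in code too (a hypothetical setup script, not part of the project):

```csharp
using UnityEngine;

// Sketch of the separate menu camera described above.
public class MenuCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var go = new GameObject("MenuCamera");
        var cam = go.AddComponent<Camera>();
        cam.orthographic = true;   // ignore depth; only X/Y placement matters
        go.tag = "Untagged";       // avoid clashing with the real Main Camera
    }
}
```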

Orthographic Camera.JPG

Orthographic Camera

Now that the main menu is a physical object on a plane and its material affects how light interacts with it, I fiddled with the material properties to adjust how it looked to the camera.

UI Image Standard FTAL

Standard UI Image

Physical Menu Object Material Difference

Physical Menu Material Properties Shifting

Another factor that affected what the main menu object looked like was the skybox, because in this instance part of the lighting came from the skybox.

Main Menu Physical Object Skybox Lighting as Default

Main Menu Physical Object with Default Skybox as Lighting Source

Main Menu Physical Object Skybox Lighting.JPG

Main Menu Physical Object with Default Skybox as Lighting Source



Now that the menu image is a physical object that can interact with light, it needs a light that isn't the directional light. A point light covers an area relative to its position, so that's the one I chose. I gave it a ridiculous amount of range and cranked the intensity to max in an attempt to cover the surface area of the image, whilst positioning it so its origin starts where the camera flash would be.
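Again this was set up in the inspector, but as a sketch (the range value is illustrative; the write-up only says "ridiculous"):

```csharp
using UnityEngine;

// Sketch: a point light pushed to extremes to try to cover the menu plane.
public class FlashLightSetup : MonoBehaviour
{
    void Awake()
    {
        var flash = gameObject.AddComponent<Light>();
        flash.type = LightType.Point;
        flash.range = 1000f;     // a ridiculous amount of range
        flash.intensity = 8f;    // cranked to the max
    }
}
```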

Light Positioning

Light Positioning

The light alone on full intensity and a ridiculous range didn't cover the image.

Light 1.gif

Moving the light away from the exact position of the image made the relative radius flash outwards, but after a certain point the light just started to fade away, as a point light does when its reach is too far from the surface. So there needed to be other layers for this to work. The next step was adding a 'Sun' flare to the point light, to really get that light crankin'.

Sun Flare

Added Sun Flare

Anything on the negative Y position meant it was translating backwards in world space; if its Y position was positive, it was 'behind' the object. It's supposed to be the Z axis, because that's forwards and backwards, but for whatever reason its empty game object parent had a rotation of 90 on the X. Because it's a child of that rotation, Y is technically up towards the parent transform, but in world space it was the equivalent of the Z axis, so I just ran with it. It might not have been best practice, but hey, it was an 8 day game jam; sometimes it gets messy.

Anywho, as soon as the light was in 'front' of the main menu image, the flare was really up in your face. Turns out, though, the further away from the object it was, the less apparent the flare became: the reverse of the point light alone. The intensity of the light had to be decreased too, because 8 intensity with a Sun flare was melting my face. 0.87 seemed like the perfect amount of intensity in combination with the flare. The light completely dissipates very close to -100, and as it gets closer to that number it hits a point where the flare creates a flash-like light right where the flash on a camera should be.

Light 2.gif

The movement that ended up being the most appropriate was from far away to then move closer. Now it’s at the stage where the light and the flare alone still aren’t covering the entire image. So in combination with the flare, I cranked back up the intensity of the point light to 8.

Light 3.gif

From -100 on the Y axis to -20 is the most effective positioning for the light. Anything closer to 0 than -20 didn't have any effect on the coverage of the image. What else makes stuff brighter?


Light 4.gif

Bloom From 0.1 – 2

In order to reach and cover the rest of the image, the bloom intensity had to be 2. Everything in combination is what created that flash effect on a physical object in space. The distance of the image to the camera didn't matter; the distance of the light relative to the object is what mattered. I had all the components I needed to make this camera flash; the next step was to automate it: do all of the aforementioned in a sequence to flash up, and then flash down.

The light had to have a -Y distance of exactly 100, the intensity of the light had to be exactly 0.87 in combination with the flare, and the bloom on the camera had a maximum of 2. The solution was to:

  • Mathf.Lerp the transform position of the light (from -100 to -20) at a rate that imitated a camera flash (this took some iteration).
  • Mathf.Lerp the intensity of the light from 0.87 to the maximum of 8 IF the light's position was -40 or closer.
  • At the same time as blasting the intensity, Mathf.Lerp the intensity of the bloom on the camera from 0.1 to 2.

Then once that worked, do the exact same in reverse.
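The steps above can be sketched as a coroutine (the values come from the write-up; the bloom field is left as a comment because the exact bloom image effect and its field name are an assumption):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the flash-up half of the sequence; the flash-down runs the same
// lerps in reverse.
public class CameraFlash : MonoBehaviour
{
    public Light flashLight;          // point light with the Sun flare
    public float flashDuration = 0.2f;

    public IEnumerator FlashUp()
    {
        float t = 0f;
        Vector3 from = new Vector3(0f, -100f, 0f);
        Vector3 to   = new Vector3(0f, -20f, 0f);

        while (t < 1f)
        {
            t += Time.deltaTime / flashDuration;
            flashLight.transform.localPosition = Vector3.Lerp(from, to, t);

            // Once the light is -40 or closer, blast intensity (and bloom) together.
            if (flashLight.transform.localPosition.y >= -40f)
            {
                flashLight.intensity = Mathf.Lerp(0.87f, 8f, t);
                // bloom.intensity = Mathf.Lerp(0.1f, 2f, t);  // hypothetical bloom field
            }
            yield return null;
        }
    }
}
```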

So now that the 'Image & Flash' for the main menu had been set up, the menu still needed a UI canvas to interact with. This time, instead of defaulting to the Screen Space – Overlay render mode, I only wanted this UI canvas to work with this specific camera.
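This was set in the inspector, but in code it amounts to (a sketch; names are illustrative):

```csharp
using UnityEngine;

// Sketch: pointing the menu canvas at the menu camera only.
public class MenuCanvasSetup : MonoBehaviour
{
    public Camera menuCamera;

    void Awake()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = menuCamera;   // this canvas renders through this camera
    }
}
```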


After wiring up the menu, the only thing left to do was to slot the transition we needed in between when the flash was at its maximum and right before it started to flash down.


I’m really proud of this. And this is only part one of a two part series. The next blog talks about moving Rect Transforms and automated sequences.

Until next time –



Posted by on March 13, 2017 in Uncategorized


Serious Game – The Presentation Of Information

So what is a serious game? A serious game is a game whose sole purpose isn't to fill someone's leisure time and entertain. It stands to deliver some form of information or training, to "induce some kind of affective or motor learning (in a broader sense)" (Susi et al. 2007; Breuer & Bente, 2010). This is as opposed to an educational game, which is designed more to help learn particular subjects or concepts.

Looking at what New Intelligence does and their client base, I think it would be inaccurate to assume that we'd be making a serious or educational game for a younger audience. The audience this game will be targeted towards looks very specific. For a target audience such as New Intelligence's client base, what could we look at that might help inform our game design? Ultimately, because we still don't know the problem this game should be solving, we can't look at anything too specific, so the search has to be broad. What about playing some serious, edutainment or educational games and looking at how they present information?

We looked at Papers Please


And here’s a good game play video where they describe what’s going on and their thought processes throughout playing the game.

Papers Please has you working at border security/customs, sifting through passports and analyzing people to see if they fit the criteria to enter the country. I am not a big fan of reading, and at the start of playing this game I approved everyone, until I started to get warnings about letting people through with wrong information. It gave me consequences for not caring. You get paid daily and have a family to look after, and income is based on how many people you process correctly. The game slowly introduces new things to be on the lookout for. One of the interesting things I found was that some people would try to distract you from reading their documents with conversation. So there's one way of presenting information.



L.A. Noire is an action adventure detective game. One example of information presentation we looked at was the sit-downs during an interview/interrogation: characters sat down face to face. These have a very large amount of spoken dialogue, and very high-end, expensive animations (the type we don't have the time or money for). I mean, can you see her facial expressions?! There are also subtitles for the spoken dialogue and options to choose reflecting how you feel about their answer.


Along with a prompt of known information.




Duolingo is a language learning platform. In the video above they're learning English from their native language of Spanish. I've spent a little bit of time on Duolingo now: I've finished the first 5 sections and started the 6th, whilst also going back and building up the strength of some topics that have weakened over time.

One of the ways it teaches words is by presenting the phrase to be translated alongside pictures representing the word to be translated. Although only one is correct, the other two images have always been relevant, introduced either in the same lesson or a future one. For me, having an image to associate with a word instantly makes it much easier and faster to retain and identify that word.


It also presents translations in pure text form. You can hover over the unfamiliar language's words and get the multiple ways those words can be translated. And as I previously stated, although "la manzana" (the apple) wasn't used in lesson one, it was the first question (and also an image question) of lesson 2. If you get an answer wrong, it gets put at the end of the lesson and you (repeatedly) get to try to answer it again.


It also presents a ‘fill in the blank’ type scenario with a drop down of words to select from.

My Duolingo.jpg

My Android Duolingo App


I've been learning Spanish on my Android phone and the above is my progress. I often find that if I open Duolingo and one of these strength bars isn't full, it's a great incentive to go back and reinforce what I've learned. It's a great motivator, and I believe it's a very reasonable concept to deploy in a serious game. If the game is there to teach something, a visual representation of progress or strength can be a strong motivator.


This is also a photo (of the app) from my phone. Can you see anything wrong with this picture? The options to choose from present the first word of the sentence with a capital letter. It's a dead giveaway. There are a few ways to react to knowing the first word of the sentence: you could be happy, because you had no idea what the word was and this is a prompt to learn, or (like me) slightly frustrated, because you're genuinely trying to learn the words and you'd rather learn through making mistakes.

Without going on to explain every detail of Duolingo, there are many concepts we can easily extract from it. And they definitely aren't restricted to serious games either; they can transfer to (my) normal game development in trying to teach players, provide motivation or present information. This also applies to some of the other games the group and I tested.

Without knowing what NI wants us to solve, looking at how other serious games present information, when, in what order, what the information looks like and how we interact with it, has still been vital research into ways we could reflect those approaches in our app. That's not to say there aren't other games that can inform our design; plenty of regular ol' entertainment games have an abundance of information jumping off the screen. Besides, if NI didn't want a game, they wouldn't have asked game developers to help them achieve what they want.

Until next time –



Posted by on February 12, 2017 in Uncategorized