Lanterns – Teaching Variable Jump Via Gameplay

Lanterns is a 2 player local co-op platformer, originally made for a 5 day game jam. Now the team (Ash Stevens & Daniel Koitka) and I are continuing to work on it, taking it from a prototype to a proper game.

Since continuing development on Lanterns, we've made a few changes, but the one I'll focus on here is the jump. Originally the jump was a static press that always reached the maximum height, and each level and puzzle was designed around the playable character reaching that fixed height. It's been changed to a variable height jump, dependent on the duration of the press: the longer the button is held, the higher the playable character goes (up to a maximum).
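Lanterns runs in a game engine, but the idea is engine-agnostic. A common way to implement this kind of variable jump is to launch at a fixed speed and reduce gravity while the button is held, up to a cap. Here's a minimal Python sketch of that logic – a sketch only, not our actual code, and all the numbers are made up:

```python
def jump_apex(hold_time, dt=1/60, jump_speed=10.0, gravity=30.0,
              hold_gravity_scale=0.4, max_hold=0.3):
    """Simulate a jump's rise; while the button is held (up to max_hold),
    gravity is scaled down, so longer holds reach a higher apex."""
    y, vy, t = 0.0, jump_speed, 0.0
    max_y = 0.0
    while vy > 0:  # simulate until the apex
        held = t < min(hold_time, max_hold)
        g = gravity * (hold_gravity_scale if held else 1.0)
        vy -= g * dt
        y += vy * dt
        t += dt
        max_y = max(max_y, y)
    return max_y
```

Because the hold duration is clamped to `max_hold`, tapping gives a short hop, holding gives a higher arc, and holding forever gives exactly the maximum height – which is what the tutorial level below relies on.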

Variable Jump


A bunch of similar references can be found here: (courtesy of Ketura).

Because the jump had changed, there were consequences that needed addressing. With the static height jump, we primarily had to account for the horizontal placement of level elements. Vertical placement was less of an issue because Lanterns wasn't focused on vertical platforming, and more so on using the static jump to platform horizontally and navigate vertically.

But now that the variable jump is properly integrated, we'd have to teach players that this is something that needs to be considered. Because the entirety of Lanterns' control scheme is taught on the initial screen and players learn how to play as they play, I wanted to avoid separate tutorials or UI.

Lanterns Title Screen

Control Scheme

Once players knew the 3 controls (across 4 buttons) for interacting with the game, the rest was up to teaching them what to do and what those 3 controls do, all via gameplay. So why not also teach the player about the variable jump through gameplay, once they've got the fundamental basics of how to play Lanterns?

Teach Jump

Exaggerated Lantern Size

Teaching Variable Jump

Variable Jump Tutorial Playthrough

The image above shows the setup for teaching players about the variable jump.

The first stage, reaching the door, requires the player to press and hold the jump button long enough for the playable character to reach maximum jump height. If the hold is too short, they can't reach the platform to continue, and there is no alternative way around this first hurdle. If they succeed the first time, it may be that they've only ever held the jump long enough to reach maximum height; if they don't, it becomes apparent that something different needs to be done.

The second stage requires the player to adjust how long they press and hold the jump button. Hold for too long while attempting to go through the small gap, and touching the spikes reinforces that what you just did isn't what's required to pass through. Don't hold long enough, and you can't reach the platform to continue.

The third stage is the same: experiment with the duration of the jump press to make it through both sets of spikes. Too long – wrong; not long enough – wrong.

Doing the exact same jump in each of these instances would show players different results. You can't progress unless you do something differently.

Collectively, the three hurdles force players to experiment with the jump, revealing that different inputs produce different results. Alternatively, if the player already knows about the mechanic, the level reinforces that the variable jump is a thing.

Lanterns was recently exhibited at Netherworld in Fortitude Valley (Brisbane, Queensland), with the variable jump in the current build, though the level shown above was not included. Every group of two played most, if not all, of the 17 levels.

In my head, the layout of this level did what I wanted it to, but then again, I already knew the mechanic existed and how to interact with it. I've gotten two separate people to play this specific level (so far):

  • My brother – who does not play video games, and lacks hand-eye co-ordination.
  • His partner – who has a history of playing video games, but now plays very infrequently.

I introduced them to the elements of the game they needed to know to play Lanterns. I did not tell them how to play or what to do. We reached the variable jump tutorial level.

My Brother: took approximately 3 minutes to complete the level, purely because of his lack of experience playing games and making the playable character do what he wanted it to do. I could see that he almost instantaneously understood that the jump was dependent on how long he was pressing the button for. He just required a little extra time to get his hand-eye co-ordination fired up.

I asked him “What did you do to get your character to where it needed to be?”.
He said: “I just pressed the jump button a little less or a little more to make me jump higher or shorter”.
Because of his lack of exposure to video games, their mechanics, and the "norm" of what video games require players to do, (I asked and confirmed that) he wanted to replicate what jumping is in life: the more effort you put in, the higher you go. He translated that effort into the duration of the button press. The mechanic allowed him to do what he expected it to do, so it felt intuitive to him automatically.

His Partner: finished it fast enough that I didn't need to worry about how long they took.

I asked “What did you do to get your character to where it needed to be?”.
They said “I just jumped”.
I asked “Did you do something differently with the jump button?”
They said “No”.
I asked “Were you aware that the jump operates differently depending on how long you press it for?”.
They said “No”.

To me, knowing their history of playing video games, they were consciously unaware that there was a difference, but subconsciously aware of it. Either that, or it was pure luck. But watching them play through other levels that don't "require" the player to utilise the variable jump, they were still using it, constantly.

So the results from two players at opposite ends of a very large spectrum confirmed for me that this specific level achieved and taught what I needed it to. That's not to say it's perfect and accounts for every player type or capability, but the spectrum of players at Netherworld didn't seem to struggle with learning the variable jump through gameplay. The next step is to create levels where the variable jump is required in order to progress.

This might not be final, but it’s definitely a start.

Until next time –



Posted by on September 23, 2017 in Uncategorized


G.I.A.N.T.S Post-Mortem Part 1


Guests In A New Time and Space

G.I.A.N.T.S is a 4 player couch co-op puzzle game. It can be downloaded here:

Created by (in alphabetical order):
Animation & Assets: Kerry Ivers, Macauley Bell-Maier, Peter Buck.
Audio: Ash Ball.
Game Design: Adam Crompton, Nic Lyness, Nicholas Staracek.
Game Programming: Caleb Barton, Jack Kuskoff, Pritish Choudhary.


Tiny Sail Games + G.I.A.N.T.S

@tinySailGames + @GIANTSvideoGame

The original group of developers and I created Tiny Sail Games! This was at the start of the conception of G.I.A.N.T.S. We wanted to start something that allows us to be a team, not just another project that brought us together, and that could potentially roll over into the foundation of building a core team. Creating G.I.A.N.T.S has been a 6 month journey, the longest game development haul that the entire team and I have ventured on. The first 3 months were pre-production – conceptualizing, planning, requirements gathering, documentation, art style and finding art reference images. We conceptualized a game based around one core experience, which we all decided was local co-op where friends or strangers come together to have a good time, connect and create some memories. Have you ever played 'Halo' local multiplayer and, right as the screen fades to black, someone presses that B button for the eightieth time to cancel? Memories…

MIB Neutralyzer



Feature Cut:

In the pre-production phase of Tiny Sail Games' first project, G.I.A.N.T.S, we ended up with essentially 5 core mechanics, which got narrowed down to 3.


Tethering: a tether is a rope between the players and EHU (Extraterrestrial Harvesting Unit), simulating rope physics. It appears when players enter EHU's proximity or insert crystals into her.

Core 1

Crystal absorption:

  • Swell – Makes the playable character larger, stronger, heavier, slower.
  • Shrink – Makes the playable character smaller, weaker, lighter, faster.
  • Stretch – Cut
  • Sticky – Cut

Core 2


There were four different types of crystals. Each crystal holds a unique power up, and inserting it into EHU applies that power up, via the tether, to the player who inserted it.

Tethers are still a core mechanic, along with inserting crystals into EHU and absorbing their power ups. What changed is that we removed two crystals and their abilities – stretch and sticky, which would have been green and yellow. Sticky would have meant that anything the playable character touched would stick to them, allowing that player to stick to and roll around on walls or the roof. Stretch would have allowed that player to latch onto fixed anchor points in the world and stretch their character, fling objects or players, or act as a bridge.

This meant not having to create two separate textures (big whoop), but mainly it reduced the number of subsequent dynamics we had to account for by a lot.

Why did we cut them?

Short story – Scope.

Long story:

The Tiny Sail team took inspiration from Gang Beasts and Human: Fall Flat and was going to pursue physics based movement – physics based movement alone, not physics based interactions. In a local couch co-op situation, who doesn't love bursting out laughing at the ridiculous things that happen to your physics based playable character and buddies? It wouldn't have overcomplicated game play, just made the non-core interactions a little more goofy and pleasing to play with. It turns out that physics based movement alone isn't that hard. But in combination with game play elements and other mechanics, on top of what we needed the sticky and stretch abilities to do and how they interacted with every other system in the game, it blew the scope way out of proportion. A small summary of the things we needed to think about:

  • Physics movement + mesh stretching working.
  • Mesh stretching and interacting with colliding objects.
  • Physics interactions of objects within stretched mesh and physics of stretched mesh.
  • Physics movement + stickiness + wall climbing + object weights + .
  • How will the camera account for 2-4 players within an enclosed or open environment, positioned on the floor, walls or roof, whilst keeping an orientation relative to the objective?
  • Physics movement + any combination of crystal power up + any possibilities of puzzle elements + 2-4 players and how they each interact with any of the aforementioned.
  • Level design consequences and puzzle design choices that we have to account for with the combination of all of the above.

Not to mention the number and severity of issues that we would discover along the way. Getting it all working and not buggy is a massive task on its own, let alone within the given time frame and with the experience our team had dealing with a game or task like this. The physics based movement was going to allow stretch and sticky to really shine; without it, they would be good, but not great. With only swell and shrink as the power-ups, the physics based movement would still be a great addition but not a necessity.
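To put a rough number on that "by a lot": even counting only pairwise interactions between systems, cutting two of them more than halves the combinations to design, test and debug. The system names below are just labels for this illustration, not our actual task list:

```python
from itertools import combinations

# Illustrative labels only: movement/tether plus four crystal power ups...
systems_before = ["movement", "tether", "swell", "shrink", "stretch", "sticky"]
# ...versus the same set with stretch and sticky cut.
systems_after = ["movement", "tether", "swell", "shrink"]

def interaction_pairs(systems):
    """Every pair of systems whose interaction has to be designed and tested."""
    return list(combinations(systems, 2))

print(len(interaction_pairs(systems_before)))  # 15 pairs
print(len(interaction_pairs(systems_after)))   # 6 pairs
```

And that's before three-way combinations, 2-4 players, or puzzle elements enter the picture – the real growth is much steeper than pairs.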

Super Complex Math

This is what it ended up feeling like.

Although all of the points I've mentioned above are just a part of game development, and I would actually love to work through them, there was no way that was happening in the development time frame of G.I.A.N.T.S.


We could happily walk away from it and focus on other elements of the game. The shrink and swell elements could still be used in so many interesting ways, by themselves, in combination with each other, and with different numbers of players. Rather than creating new mechanics to create interesting dynamics, utilise and expand on the existing ones. Don't be afraid to cut features from initial ideas; sometimes less is more.

Cause Analysis

Future Lessons

Using FMOD for Audio:

Mute Unity Audio


For the entirety of G.I.A.N.T.S we used a program called FMOD. This is what Ash (our audio guy) and I know how to use, because we've worked together on multiple occasions. Mindstate was the first game where we integrated FMOD with Unity, and that's where we discovered most of the hurdles and issues we'd need to tackle for further projects. That was definitely worthwhile because it opened up avenues to learn the foundations of external audio applications. I was solely responsible for getting all of the audio into G.I.A.N.T.S and collaborated very closely with Ash. There are plenty of benefits to using it rather than the standard Unity audio setup. As an example: this is a single sound split into two sections, EHU hovering and EHU locking into a node, that are both comprised of different tracks with different transitions and other fancy audio stuff.

EHU Lock Node

EHU Lock Node Parameter

Rather than having multiple sounds and having to worry about code, timing and transitions, it's all handled by a single event. It starts by playing the "EHU Hover" sound, enters the "Hover Loop" zone and loops repeatedly until the "LockNode" parameter knob gets set from 1.00 to 2.00. Then the transition from the hover sound to the lock node sound plays.
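From the game code's point of view, all of that complexity collapses into setting one parameter – the looping and transition logic lives inside the event. This toy Python sketch (not the real FMOD API, just the shape of the idea) mimics that division of labour:

```python
class ToyEvent:
    """Toy model of an FMOD-style event: the transition rule lives in the
    event itself, so game code only ever sets a parameter value."""

    def __init__(self):
        self.parameters = {"LockNode": 1.0}
        self.playing = "EHU Hover"  # starts looping in the hover region

    def set_parameter(self, name, value):
        self.parameters[name] = value
        self._update()

    def _update(self):
        # The authored transition region fires once LockNode reaches 2.0
        if self.parameters["LockNode"] >= 2.0:
            self.playing = "EHU Lock Node"

event = ToyEvent()
event.set_parameter("LockNode", 2.0)  # game code: one call, no timing logic
print(event.playing)  # EHU Lock Node
```

In the real project the call on the Unity side is similarly small – the sound designer authors the behaviour in FMOD Studio, and the programmer just drives the exposed parameter.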

Why did using FMOD go so well?

Ash and I had collaborated multiple times (Transmutation, The Ride, Mindstate) before the start of this project. By now, I'd already learnt most of the foundations and actions required to get FMOD integrated into Unity and make it do what we needed; most of the hard learning curve had already been conquered. Ash knew exactly what he needed to do to make the sounds game ready. There is a large distinction between sound design and making those sounds game ready – for example, setting transitions and looping areas of sounds with exposed parameters that also have appropriate naming conventions, as mentioned above.

There were only two things left to do:

  • Learn how to get VCAs, which is relatively similar to getting other parameters.
  • Make FMOD sounds playable via code.

We planned for it.

From the get go, it was decided that we would run as much audio as possible through FMOD. We knew the capabilities of FMOD, we knew what it allowed us to do, and between Ash, Staracek and myself, we decided a little bit more leg work in the earlier days would save us a whole lot of workload down the track. FMOD integration was my third commit after starting work on G.I.A.N.T.S – setting up foundations that made the workflow and audio integration process much easier and faster.


Audio from the get go.

Audio was (and always is) just as important as the rest of the game. I didn't leave audio integration until the last second of the project; it was an ongoing process over its entirety, in which I set aside a specific amount of allocated workload each week to make progress on audio as it became available, tackling each problem as it arose. Plus, once everything was wired up with placeholder sounds, all Ash had to do was overwrite the old sound with the new one and re-build, and it'd be updated. I've left audio until the last minute before, and it didn't go badly at all in those instances – but those were very small games in comparison, and there was much less margin for error here. Having feedback sooner rather than later was always going to be good, especially because we wanted to put this in front of players as soon as possible.

I've had an experience working with Ash before (when I wasn't in charge of putting in audio) where a lot of his efforts and assets went to waste, because the person who was in charge didn't put in the effort or allocate appropriate time for audio integration. Not only was there a lack of feedback, I can only imagine it felt like a kick in the teeth to Ash – just as it would if I created content for someone and the majority of it didn't make it into the project due to lack of effort or appropriate priorities. Seeing this made me take it on board as if it were a problem that 'I' caused and didn't want to see happen again, causing me to plan future endeavours so that nobody on the team/project has to watch their content go unused.

Audio Checklist

Future Lessons

I can’t say “have a constant audio team member that knows what they’re doing”. But what I will say is:

  • Continue to plan for audio with the same amount of importance as the rest of the piece of media (not game specific).
  • Plan where your audio is playing from, is it housed in a game engine or through an integrated program?
  • Plan for each of the sounds that your piece of media will need.
  • Document each of the sounds that your piece of media will need.
  • If more than one team member – document who is in charge of this specific asset.
  • Have a checklist of some sort showing whether each sound is complete, in progress, or not complete.
  • Document whether that asset is readily available for you to view, and where you can find it.
  • Document if it is implemented in your piece of media. Just because it’s implemented does not mean it’s finished.



The Camera:

G.I.A.N.T.S required a rather complex camera system that we greatly underestimated. The camera was the largest issue throughout the entire development process. It created major hurdles that hindered progress on getting the core game loop working, and caused a cascade of other issues that were all solvable by having a working complex camera system. We pursued a similar style of progression to INSIDE, where the game has a constant flow of environments through puzzle play. So rather than loading different levels and puzzles separated by load screens, the transitions never took the players out of the game.

What caused this camera problem?

In pre-production we didn't know the exact systems the camera would need in order to operate. We underestimated the amount of work that had to go into the camera to account for everything we needed it to do – we didn't exactly know what we needed it to do. By the time production came around and elements of game play arrived bit by bit, we started to fully understand how this game was going to be constructed and played, and what we needed the camera to do. We hadn't gotten the camera prototype to a stage where it even remotely started to do what G.I.A.N.T.S would need it to.

We had a specific team member who was allocated to getting the camera system working and implemented. They did not deliver, or make anywhere near enough progress, by any of the deadlines. We had to switch team members who were working on other systems onto the camera, bringing those other systems' progress to a grinding halt. My point here isn't to play the blame game; it's to reflect and learn to have contingency plans if production elements go awry.

Future Lessons:

Research and explore other games that have similar elements of game play; pay attention to what the camera does, make notes, and try to extract relevant information that will help design the system we need. Try to replicate small doses of functionality, then plan how these little elements will tie together into a cohesive whole. Essentially, rapid prototype before the production phase.
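As a concrete example of one such small dose: most multi-player cameras start from "frame everyone" – track the centre of all players and zoom out just far enough to contain them plus some padding. A minimal Python sketch of that maths (the numbers and function name are arbitrary, purely for illustration):

```python
def frame_players(positions, padding=2.0, min_half_size=5.0):
    """Camera target: centre of all players, zoomed to fit them plus padding.

    positions: list of (x, y) player positions.
    Returns (centre, half_size) for an orthographic-style camera.
    """
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    centre = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    # Fit the larger of the two spans, never zooming in past a minimum.
    half_size = max(max(xs) - min(xs), max(ys) - min(ys)) / 2 + padding
    return centre, max(half_size, min_half_size)

centre, size = frame_players([(0, 0), (10, 4), (3, 8)])
```

Prototyping even this trivial version early would have surfaced the hard follow-on questions – players on walls and the roof, enclosed rooms, orientation relative to the objective – before production depended on the answers.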

Have a contingency plan (which comes under risk management). Decide what tasks take priority, what gets cut or how the production pipeline changes if an element needs attention or won’t get done.

Stay tuned for part 2 of the G.I.A.N.T.S post-mortem, where I discuss topics like:

  • Being the project manager of 10 people (which is large for a student team).
  • Asset management within Unity: prefabs, meshes, naming conventions and the maintenance they entail.
  • Weekly meetings within collaborative space. Working in a collaborative space.
  • Tiny Sail Games Discord server and the purpose it serves and how it operates.
  • Being the social media person for G.I.A.N.T.S and Tiny Sail Games.
  • People Management.
  • Teaching the animators how to be unity competent.
  • Teaching team members unfamiliar with Source Tree how to use it without making the repository explode.
  • Explaining how those team members made the repository explode anyway, and how Caleb fixed it.
  • Particle Effects!
  • FMOD Scripting.
  • Documentation – Including bug lists, schedule, project breakdown.
  • Why initial shipped product was lacking level 6 and 7.
  • How the game ended up not being 2-4 player friendly and is only 4 player friendly.

Until next time –




Posted by on August 25, 2017 in Uncategorized


Uncharted 4: A Thief’s End Hey… are you happy with this life?


Nostalgia Lab

I love you. But this isn’t who I am.

Uncharted 4: A Thief's End is generally valued for its technical, graphical, and cinematic achievements, as were the entries that preceded Nathan Drake's final outing. I'm not going to talk about that. While that discussion truly deserves our attention, I want to focus on a singular moment that occurs towards the start of the game, one that doesn't derive from any technical, graphical, or cinematic achievement. A moment that is truly left in the hands of the player. A moment that is remarkably rare, and I use that word in its every sense. I want to talk about why, in spite of everything it will be succeeded by, this moment stands out and serves as an anchor for understanding the human qualities and topics hidden throughout this interactive piece.

Let’s try to explain the context as swiftly as possible.



Posted by on May 28, 2017 in Uncategorized


Post-Mortem – New Intelligence (Studio 3)

Over the past 12 weeks, the team (Adam Crompton, Joshua Textor, Nicholas Staracek) and I have stepped through the entire process of creating a mobile app for New Intelligence – an app for their client base to practice the things they've been taught. As our first time working with, and creating an app for, a real commercial client, it was far from perfect. From our perspective it's not ideal, and from New Intelligence's perspective it probably isn't either. But I'm glad it didn't run perfectly, because it means we have SO much to improve on and learn from in the process.

In twelve weeks, the expectations and scale of this undertaking, plus some other inherent tasks that followed, were:

  • Learn to work with a commercial client.
  • Learn their content (of interview techniques).
  • Remember their content.
  • Practice their content.
  • Research serious games and how they can present information in meaningful ways.
  • Learn how to use their content effectively in ways New Intelligence agreed with.
  • Create content.
  • Learn how to present their content in meaningful ways.
  • Implement content.
  • Collaborate with the commercial client.
  • Create a survey that asks content related questions in a meaningful way to the commercial client's clients (end users), to gather very broad (but oddly specific) answers.
  • Get results and analyse results from testing.
  • Implement changes based on survey and testing.
  • Learn how to do this (effectively) on a mobile device. + Orange
  • Learn how to deploy (effectively) to multiple types of device. + Orange
  • Get this on the Android and iOS marketplaces.
  • Documentation.
  • Actually make the app and all of the game development processes.
  • Still do university.
  • Do other university classes.
  • Not to mention more that I’ve overlooked or haven’t written down.

The orange items are things we've never done before.
The purple items are things we would most likely struggle with, or that are time-consuming.
To me there is a difference between the scope and the scale of a project: scope being the app and its content (in this context), scale being everything I mentioned above. Don't get me wrong, I'm not complaining about what was asked of us. But given the above, I think that for 12 weeks, the scale of this project was drastically oversized – especially considering the quality I wanted this to be, and the things we had to learn in order to make it a reality. Not that any of this isn't do-able, just not able to be perfected in the time-frame given.

Project Management Triangle 1.png

Project Management Triangle

We weren’t and aren’t getting paid for our work on this. The “scope” for this project is “scale” instead, and time is the 12 weeks.

NI Project Management Triangle

The red is where I believe the points on the scale are relative to cost, scope and time. The black is a guesstimate of where their average point is. The blue is where I'd wanted it to be (cost irrelevant). But as the next picture describes, the area that point would land in doesn't exist.

Quality Triangle

There is no hybrid of all three, it’s always 2/3

What went right.

Communication between designers.

Throughout the 12 weeks of development of the New Intelligence app, we designers (in my opinion) could not have communicated better. Whether it was over Discord voice chat, leaving messages in specific New Intelligence related Discord channels, updating documentation, emails, or collaborating in the work space, there were no gaps.


This worked for a number of reasons, one of which is having very specific work times where we were all in a collaborative work space – university – enabling us to communicate in the most meaningful of ways, face to face, for a decent amount of time. Another reason, just as important, is that we all care about what we're doing enough to actively pursue every necessary opportunity to push towards the best possible outcome. I'd also like to believe it's because we all enjoy communicating with each other, whether it's about the project at hand or every day chatter.

What would we continue to do next time?

Continue to separate work related communication channels from anywhere that can have distractions. Don't have personal conversations in work channels, and vice versa. Don't have work conversations on social media platforms where cat videos can easily get the better of you. This was one of the first projects where we had very specific channels that only the New Intelligence game development crew had access to. Keep the conversation work related only, and only give access to those who are on the project with you. Face to face > voice chat > text chat, whenever possible.

Design and Concepting.

The New Intelligence app wouldn’t be what it is if there wasn’t dedicated time set aside to figure out what it is and to concept how to use content effectively. Here’s a blog I wrote about it.


Going back to the roots of rapid concepting and rapid paper prototyping was a big contributor to this. Pumping ideas out on pieces of paper allowed us to move on and repeat the process, building a collection of ideas – and a physical pile of them that we could reflect on.

What would we continue to do next time?

Do the same as mentioned in the blog: design concept, paper concept, paper prototype. Don't just dive straight into Unity; dive into Unity when you have something worth prototyping. Don't build from scratch and fumble in the dark.
Also: USE A GIANT WHITEBOARD OF A WALL. I can't stress how useful it is to have a giant wall to draw on, big enough for a few people to all draw at once. It's like having a physical 3ds Max, Unity, Source Tree, Google Drive and Paint, where everyone can sculpt pathways and plant trees. Take photos of it and put them in a shared Google Drive for everyone to reflect on.

New Intelligence received an app.

They got an app. We delivered. Although it might not be to the original scale that was expected of the team, the project was still delivered, and in the form of an app.


We followed a project schedule and a project plan. I wrote about project methodology in this blog here, which helps explain that this project plan was in no way linear. It was incredibly hard to account for the number of possible changes or challenges that could occur, although I'm sure this is just called game development. The reason we were still able to push through was this project schedule and project plan – that, and treating this as a real job. You don't go to work to not work, right? At least in my mind, I go to work to work. This opportunity had importance and outcomes, and the team and I treated it accordingly.

What would we continue to do next time?

Continue to have project methodology. Know it, understand it, plan it, document it and use it. Have a project manager to enforce project schedule and project plan.

What went wrong.

GDD Documentation for programmers.

This is the most critical thing that went wrong with the project. Although the designers and I were pretty on top of the documentation, there was one flaw within the GDD: we had documented only 1 of the 3 question types – its functionality, how it operates within the app, and how users interact with questions to achieve the desired effect. The three question types each had different functionality, so they needed to be presented in different ways, with different amounts of information presented in different sizes and places. Somehow in production it was decided that we were procedurally generating questions. I think we were trying to go for procedurally generated UI so that the system would be infinitely expandable and self handling. That would have been fine if we had the time to do so; in this instance I don't think we did.

We spent 3 weeks waiting on the programmers to create the system to procedurally generate the UI, the questions and all of the bits and bobs, so we could start pumping content in and have it self handle. In those three weeks, us designers were doing other things – it wasn't just 3 weeks of downtime – but we (kind of) hadn't even opened up Unity until this point. Then the programmers said "Yup, this is ready for you to do what you need to, here's how it works, aaaaaand go".

Because there were so many different types, sizes and structures of answers, with ever-changing amounts and locations of text, the programmers' approach worked but wasn't user-friendly at all. It wasn't easily decipherable or customisable. We couldn't put the content into the app and have it user-friendly, in the right location and size; every UI panel was different, and we couldn't possibly procedurally generate everything we needed with proper formatting, sizing and placement so that it stayed coherent. I'm not saying it can't be done – it just couldn't be done to the level of expertise we needed in the time frame given. The solution was a hybrid: the designers manually constructed UI panels and attached the correct functionality where it was needed, then those panels were instantiated under the appropriate conditions of activity and exercise. It's a lot more manual labour, which is what we were trying to avoid, but it was a necessary evil in order to get where we needed to be.
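The hybrid can be sketched like this, with Python standing in for Unity C# and the question type names and fields being hypothetical (ours differed): designers hand-build one panel template per question type, and code only picks, copies and fills the right one:

```python
import copy

# Hypothetical templates; in the real app these were hand-built Unity
# prefabs with the correct layout and functionality already attached.
panel_templates = {
    "multiple_choice": {"layout": "grid", "slots": 4},
    "ordering":        {"layout": "vertical_list", "slots": 6},
    "free_text":       {"layout": "single_field", "slots": 1},
}

def spawn_panel(question):
    """Instantiate a copy of the hand-built template for this question's
    type, then fill in the question-specific content."""
    template = panel_templates[question["type"]]
    panel = copy.deepcopy(template)  # never mutate the shared template
    panel["prompt"] = question["prompt"]
    return panel

panel = spawn_panel({"type": "ordering",
                     "prompt": "Order the interview steps."})
```

Layout problems are solved once per type by a human, while the per-question work stays automated – more manual than full procedural generation, but far less than building every panel by hand.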


There were gaps in the GDD. There were gaps because the development process got a bit messy: focus changed, some things got left behind and some things got lost. This comes down to having such a large-scale project in such a short time frame. Part of it was also that there were so many new things we were trying to learn at once, which distracted us from the things we already knew how to do and what we should do. This isn't an excuse, but rather a point to reflect on: lack of experience in so many areas affected this part of the project.

What would we do next time?

In the project schedule, have specific iteration time set aside for documentation. I can't state exactly where in the timeline this went awry, and I can't and won't point fingers, but if this process had been implemented and every member of the team had iterated over the documentation weekly, I'd like to think there's a strong chance the impact would have been smaller. Isolate the specific areas we don't know how to handle, then gather and practice the knowledge that lets us stop fumbling in the dark.

Writing a questionnaire.

We ended up spending almost a whole week concepting and coming up with very open questions that aren't very specific, but oddly evoke a very specific answer from which we can extract an array of information. It dragged out a little longer than it needed to, but this was going directly into the end users' hands after it went through NI. The downfall was that the questionnaire never got sent out. So we never got any answers back that would have helped inform the design of this app.


This was completely out of our control. New Intelligence had the list of their clientele, and for confidentiality reasons we did not, and could not, gain access to it.

What would we do next time?

Email New Intelligence so frequently asking them to send the survey to the point where they get so annoyed that they actually send it just to shut us up.

Some other side notes.

Intertwining two projects into the same repository.

This wasn't terrible. With so many members and not so many scenes, we wanted to avoid merge conflicts, so the designers had one project and the programmers had another, both in the same repository (repo). The programmer project was where all of the magic was happening, and I/we weren't exactly familiar with the layout of their project hierarchy. The 'designer' project that I set up was where all of our work went. It was basically me testing different things out as a UI designer, getting a feel for what the app could possibly look and feel like, layout and so on. By no means did I think this was going to be the project the app would be built from; I was dabbling and implementing different ideas in my own time to test things out before production on the app actually started. By the time production actually started, it was inside the programmer project. The groundwork I had done before was still good content and was going to be transferred over. The designer project wasn't a complete app, but it had the foundations and templates of how the app could/might work and look. Eventually we transferred the current progress of the designer project into the programmer project: we exported individual prefabs of activities, packaged them up and put them into the programmer repo.


There were a few problems with UI scaling: when the panels got instantiated, the entire panel didn't scale or position correctly on different devices. A large portion of this came from prefabs we exported through packages (from the designer project) with specific anchoring points, stretch positions and so on. Because there were so many people working on the project, every time a prefab (or the copies in the scene) was corrected, it somehow got overwritten again somewhere in SourceTree. It was a never-ending loop of tracking down an anchoring position in a meta file or something. Luckily, a SourceTree wizard was eventually able to track it down; we halted production on all of the prefabs, deleted the old ones from the projects, refreshed to only have the up-to-date ones and pushed those.
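For what it's worth, scaling issues like these usually come down to the anchors on a panel's RectTransform. A hedged sketch (an illustrative helper, not our actual fix) of forcing an instantiated panel to stretch to its parent so it scales on any device:

```csharp
using UnityEngine;

// Hypothetical helper: after instantiating a UI panel prefab, stretch it
// to fill its parent so it positions and scales correctly on any device.
public static class PanelUtils
{
    public static void StretchToParent(RectTransform panel)
    {
        panel.anchorMin = Vector2.zero; // anchor to parent's bottom-left
        panel.anchorMax = Vector2.one;  // anchor to parent's top-right
        panel.offsetMin = Vector2.zero; // no left/bottom margins
        panel.offsetMax = Vector2.zero; // no right/top margins
        panel.localScale = Vector3.one; // imported prefabs sometimes drift
    }
}
```

Setting the anchors in code like this also means the values can't be silently overwritten by a stale prefab coming in through a merge.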

UI Design.

On this project I was the lead UI designer, learning specific things from Steve Krug's Don't Make Me Think.

Have specific buttons always in the same location. Make things easily navigable. Make things that need to be pressed (buttons) look like they can be pressed, and make things that can't be pressed look like they can't be. Use headings, and always let the user know where they are. Tell them and show them how to complete activities and how to play. [I'll link to a blog post here about the things that I've learnt and the traits that I've carried over and implemented].

Working with a client.

It was daunting but also thrilling to work with a commercial client. Getting to meet real people who work for a real company, who have sought out our area of expertise to help them achieve a desired output, is a real confidence booster. Having those initial meetings where we lay the foundations and approach the scope of the project is something I look forward to doing again in the future. It's so unfamiliar, yet so intriguing. Then going on to take their interview technique training course made it surreal. They were actually investing their time in us. In this case we didn't get paid, but they spent money, and didn't 'make' money, to provide the training that they did. We walked away with that knowledge, and also had the opportunity to work with a real commercial client whilst we were still university students. Ultimately there wasn't as much contact with New Intelligence as we would have liked, but it's still one part of the spectrum of how contract work can go. When we did have contact, they were always more than supportive and invested in what we were all trying to achieve. They had opinions and expertise and weren't afraid to express them, but did so in a professional manner.

Designing for content to go into an excel spreadsheet.

The back-end systems were being designed by the programmers. We were to design content and lay it out in very specific ways in an Excel spreadsheet, which they would be able to do their wizardry on and sift back into Unity. The point of this was that if we (or anyone else working on it) ever needed to extend or change content, we wouldn't have to sift through different parts of Unity trying to find where it needed to be changed; it would all live within very specific areas of documentation. It also aimed to be self-handling, so in the long run it's a lot less manual labour.


Adding content into an Excel spreadsheet and formatting it appropriately for the programmers to export to XML and read into Unity.
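So the pipeline was roughly spreadsheet → XML → Unity. As a hedged illustration of the Unity end of that pipeline (the element names and classes here are made up for the example, not the real schema), reading questions back out of the exported XML might look like:

```csharp
using System.Collections.Generic;
using System.Xml;

// Hypothetical reader for the exported question XML. The <question>,
// <prompt> and <answer> element names are illustrative only.
public class QuestionData
{
    public string Prompt;
    public List<string> Answers = new List<string>();
    public int CorrectIndex;
}

public static class QuestionLoader
{
    public static List<QuestionData> Load(string xmlText)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xmlText);

        var questions = new List<QuestionData>();
        foreach (XmlNode node in doc.SelectNodes("//question"))
        {
            var q = new QuestionData
            {
                Prompt = node.SelectSingleNode("prompt").InnerText,
                CorrectIndex = int.Parse(node.Attributes["correct"].Value)
            };
            // Each answer becomes one entry; no UI knowledge lives here.
            foreach (XmlNode answer in node.SelectNodes("answers/answer"))
                q.Answers.Add(answer.InnerText);
            questions.Add(q);
        }
        return questions;
    }
}
```

The appeal of a setup like this is exactly what's described above: changing a question means editing a spreadsheet cell, not digging through Unity scenes.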

Opening Unity a little earlier.

I wish we'd opened Unity earlier as a team and gotten to fiddle with different aspects sooner, such as UI layouts, and started to discover all of the fires we'd come across later in mobile development, because there was quite a large portion of this project that we had never done before. Testing out reading specific elements from the XML and being able to format and position them right. Understanding that there was so much differently sized information that one size wouldn't fit all. But how could we possibly have known that we needed to open Unity earlier instead of honing in on the design concept? It's a double-edged sword: open Unity too early and the design suffers; open Unity too late and the fires have already been burning. The week wasted waiting on the questionnaire could have been used more effectively. But again, how were we to know? This project has gone from being solely on us to complete in 12 weeks to, hopefully, a bit longer of a project that others might get to pick up and continue. Or New Intelligence might want to keep pursuing this with us at another time? Who knows what the future holds.

Stepping Down.

This is one of the first projects where I've stepped down from being the point of contact for the other disciplines, and also stepped down as project manager. I explain the project management side of things in [this blog].

As for the external talent liaisons: art and audio was Nic S, and the programming liaison was Josh. There were times of conflict where the programmers were literally in the next room, and it was much easier for me to go and discuss what I needed than to relay it to Josh so he could walk next door and do the exact same thing. But because Josh had been the point of contact between the programmers and the rest of the group, he ultimately had all of the knowledge of what was going down, and I didn't. What I thought 'must' be right, and the right way to do it, wasn't necessarily the case. So stepping away and respecting the fact that he was the liaison was something I actively had to remind myself of.

Stepping Away.

This is one of the first projects where I've had some pretty heavy personal issues come up that affected my ability to 'do life', among other things. Heavy enough that I dropped off the face of the earth for over two weeks. I never said anything to my teammates (and I'm sorry for that), but at the same time, without me needing to say anything, they were able to pick up on the fact that I needed space and continued to power on without me. The approach I took wasn't the best, because sending a message isn't that hard at all. But it's important to take away that in a team like ours, woven so tightly together, if one thread comes loose the entire weave doesn't fall apart. Foundations and frameworks had been built, for the project and as people, that allowed the team to follow through on what needed to be done. I like to think that I'm a team player and contribute the best I can, so walking away isn't something I'm particularly familiar with. I've been told that sometimes I need to ease off a bit and spend a little less time working and a little more time on myself before I burn out. This was one of the only times that I've barely been able to function as a human being, and I apologise if this is a bit personal for a game development blog (and this isn't exactly something I'm comfortable leaving on the internet for public consumption), but I'm leading to a point. I've always been so worried about spending time away from my career journey, because I'm pursuing something that I love and contributing to something that's part of a larger picture. But in times where we can't possibly 'human', how can we even think about work? We don't. We shouldn't.

We need to take care of ourselves before all else.

Balance. The world keeps going. If the frame of the picture you're in doesn't put your well-being as the most important pixel, you worry about you and do what you need to do to be right. I'm writing this in the post-mortem because these wise words have been spoken to me, and I'm speaking them to you. That, and also so that when the time comes that I'm sifting through this blog in a few years' time, I can reflect on this moment and know that I made it through, and so did the project.

If you wish to read any of the other post-mortems from any of the other designers:

Adam: Post-mortem.
Josh: Post-mortem.
Nic S: Post-mortem.

Until next time –



Posted on May 5, 2017 in Uncategorized


Studio 3 – The Past 13 Weeks

Studio 3 and I made/worked on an app for a commercial client – New Intelligence.


I wrote a few blogs on the things I did:

I made a game with Nicholas Staracek called Freedom Through A Lens.
(I programmed and did the UI.)


Which got featured on (EDIT: 6) sites!

Waypoint VICE

Jupiter Hadley Resist Jam Favourites





I prototyped a menu where all of the UI elements are physical objects. The menu aims to eliminate load screens and downtime with player interaction, never removing the player from the game.

I watched and took some notes on Anisa Sanusi's GDC talk on Dark Patterns.

I started to read through a usability-through-motion manifesto.


I worked on a controller-based button-press UI with some juice: randomised fill origin, method and rotation, with some shake.

I read a book on UI design.

I made a turntable UI that works with 3D or 2D elements, with animations created in Unity.

I recreated the functionality and part of the aesthetic that DiRT: Showdown uses for its menu.

I converted some GUI (Unity 4) from a paid asset into Unity 5+ UI and condensed it to use less screen space.


Got trained in interview techniques by New Intelligence, to know the content in order to make the app for them.


Until next time –



Posted on May 4, 2017 in Uncategorized


How Can I Be A Commercial Game Dev Other Than Selling A Game?

My teammates/friends and I are nearing the end of our studies, and we have to consider all of the ways game development exists in a commercial market other than making a game and selling it. In our recent journey through Studio 3, working with New Intelligence, we followed one of those alternate paths.

We’ve been introduced to a Business Model Canvas.

Business Model Canvas.gif

A Business Model Canvas

I’d love the possibility to continue doing contract work as a UI/UX Designer. So what steps would I take to set up my own business model canvas if I was to pursue this line of work?

1. Customer Segments

Who are my clients? Anyone who has UI or UX. Sure, but more specifically: what is the client's audience? How large a project am I taking on, and what's the time frame they need it in? I'd like to start off on very small-scoped projects and, as my skills, confidence and contract work experience grow and solidify, expand the horizons and start to join larger teams and projects. Starting with any of the connections I've made at university, then onto any of the internet's very small games that require the assistance of a UI designer. Slowly building the confidence (with possibilities) to move on to larger games that reach a larger audience.

To me, the goal isn't to work on large-scale games just so those games reach more people and more people get to see my work. Sure, that's cool and all, but the goal is to work on a game where my skill set is required, trusted and effectively utilised enough to accomplish the goals that game sets out to achieve. The larger the audience, the larger the scale for scrutiny. The goal of working towards larger projects is to push myself as a designer, to constantly be getting better.

2. Value Propositions

What do I bring to the table?

  • To deliver content that achieves its intended experience.
  • To not make users think about what they have to do.
  • To make UI
    • Useful
    • Learnable
    • Memorable
    • Effective
    • Efficient
    • Desirable
    • Delightful

3. Channels

Now that we know who our customers are and what we are offering, we need to think about how we are going to find customers and what the best channels for doing so are. We found a process that outlines how we would go about finding clients, listed below.

  1. Explore our existing connections.
  2. Reach out and make new ones.
  3. Self-advertise on social media, Kickstarter and at exhibitions.
  4. Set up a meeting to sign contracts and further pitch our service and support. Define handover and future support. Platform updates?
  5. Ongoing consultations as we develop the final product.
  6. Ongoing support/updates.

4. Customer Relationships

What type of relationship does each of our customer segments expect us to establish or maintain?

  1. Initial point of contact with our customers, whether it’s online or in person.
  2. Show existing work, or current work in progress.
  3. If interest is piqued, put together a concept to give a preview of what we could offer them. Depending on my experience with UI/UX design and the scale of the potential workload, this could be paid work. Paid concept work is a small cost to see potential, and still low risk. Phrasing it as 'and then you don't have to pay any more if you don't feel it's going the right direction' would be helpful.
  4. After a green light on the concept, we would get a contract signed and move forward into development.
  5. After handover we would have an ongoing relationship. This will vary largely depending on the contract signed and the amount of support/updates needed post handover. This would include setting up lines of communication that won’t go out of action.

5. Revenue Stream

From what sources, and how, am I going to get revenue?

Concept development – offering this service for free would be too much of a gamble if the client didn't choose to pursue my services afterwards. It'd be like asking a tattooist to draft a tattoo for free, then the customer walking away saying 'no thanks'. It's a waste of time, especially if it's repetitive. The small amount of money shouldn't matter to these individuals or teams, and if it does, they probably aren't seriously interested or willing to invest anyway.

What about content delivery?
This again comes down to the scope of the project or the workload I'm taking on. If it's a rather large project, then for risk management's sake there would be milestones set, with payments to be received in order for work to continue. This allows the client to pull out of the project at any of these milestones while we still get paid for our work. I'd also compare this to what others on the team are getting paid, and the amount of work asked of me in what time frame.

6. Key Resources

What are some of the resources that this project and business model would need?

  • Software and a place to work, whether that's at home, while travelling, in a collaborative work space or even in my own work space. Which software am I using, and are there any upfront or ongoing costs to be accounted for?
  • A list of production costs.
  • Intellectual rights on their work and mine. Agree on who owns what.
  • Human resources – at this current point, I or my client might have to outsource art, because art is not my strong point.

7. Key Activities

What are the core activities that I will be undertaking?

  • Content Design – Concept development, what I bring to the table.
  • Content Creation – Create the content.
  • Content Implementation – Implement the content to be usable.
  • UI/UX advice – guide the client and justify reasoning for my choices through experience in this field of expertise.
  • Content Refinement – Make it the best it can be.
  • Content Polish – Make it super juicy or clean and delightful to use or see.
  • Relationship Maintenance – Continue to collaborate and keep contact with client.
  • Delivery – Present content to the client.
  • Support and Updates – ongoing support.

8. Key Partners

These are the key people/companies I could be working with:

  • The client – ultimately they’re the ones I’m designing for.
  • Game Designers
  • Game Developers
  • Indie Developers
  • Indie Studios
  • Studios
  • Large Studios
  • Artists
  • Programmers
  • Platform Owners – Unity, Unreal, 3DS max, Blender, AI, Photoshop, Windows, SourceTree, Github, ETC.
  • Customers – the people actually using the product
  • Bank – If I need financial support.

9. Cost Structure

What are the costs to consider while doing this line of work?

  • Living costs
    • Location
    • Vehicle
    • Food
    • Water
    • Internet
  • Travel costs, accommodation.
  • Face to face meetings – wine and dine, self living costs.
  • Equipment and software – platform, licensing, work space.
  • Marketing – website costs.
  • Salaries.
  • Legal costs.

While this is a very general business model plan, it's hard to be too specific at the moment with so many variables. This isn't a comprehensive list of everything that could possibly go under these headings, because that would make for an extremely long read; plus, I think there are people who actually teach this type of thing at a professional capacity.

Until next time –



Image and Video

Business Model Canvas: Osterwalder, A. (2013). A Better Way to Think About Your Business Model. Harvard Business Review.


Posted on May 3, 2017 in Uncategorized


New Intelligence App – !Play Testing

We had the original timeline of having an alpha by week 7 and a beta by week 8. Week 8 was when we were going to send beta builds out to New Intelligence's client base (our end users) and start to gather analytics and improve the app's usability based on feedback. But unfortunately this never occurred, not even a little bit later.

There were some forms of play testing, though, while we were in the paper prototype phase. At some point the team and I were in between transitioning from Marvelapp to Unity. By this transition I had already prototyped some menu stuff to get a feel for what the app could look and feel like, which was much different to the menu stuff we had within Marvelapp. During the transition, each of us selected a section of the app and paper prototyped (on post-it notes) each possible screen that a user interaction could lead to. It wasn't assessing the testers' knowledge of the app, or assessing them at all. It was solely to assess the app's usability and how people interacted with it, to watch what they thought they could interact with.

New Intelligence with mid circle

Sorry for not having photos of the paper prototypes; they must have gotten lost in the fires. One of the most prominent issues: when I originally made the PROSPECT model unfold in a circular motion, I had a middle circle there to help me position the exact location where the PROSPECT letters would sit. It didn't look terrible, and it didn't look not (!)terrible. But every user who tested the paper prototype tapped on that centre circle at one point or another. It did nothing, and watching every user tap on it made me realise it was just a distraction: the PROSPECT model doesn't need a centre hub to rotate around.

There was also the point in time, far enough into the design process and Marvelapp, that we had a scheduled Skype meeting with NI. The meeting was to give them a build of the Marvelapp prototype that they could run on their phones and play test. Marvelapp didn't allow us to have the UI functionality we wanted at all, but it let us get a feel for layout, spacing and transitions. The Skype meeting allowed us not only to present what we had so far, but gave NI the opportunity to critique what they liked and didn't like, and at the same time we could run through the application with them and watch how they used it.

If we had been able to gather analytics on alpha and/or beta builds, it would have been awesome to gather:

  • How long each user spent on each question.
  • The frequency of which answers were chosen.
  • The percentage of which users got answers right and wrong.
  • How long they spent looking at feedback screens.

Seeing how long each user spent on each question might not give very specific information, but it would allow us to identify an average completion time and understand which questions are more time-consuming or difficult to answer.

The frequency would allow us to detect what users think the more obvious answers are, or which they think is the correct answer.

The percentage of right and wrong answers would help us determine whether the way a question is being presented is too difficult or too easy, or whether users are having difficulty finding the right answers, enabling us to iterate on the design and not just test retention, but reform questions in ways that still do what's intended, more decipherably.

Detecting how long they spent on the feedback screens (in comparison to how many times they've completed that question) would be the most intriguing, in my opinion. If users spent a lot of time on the feedback screens, (I'd hope) it's because they're actually reading the reasons why the answers were right or wrong. If they're practically skipping it, did they do it by accident, or do they just not care? Because if they don't care, this app is pointless. The point of the app is to give those who have gone through New Intelligence's training course opportunities to practice those skills outside of work hours. If they don't care about the app, do they care about the training they've taken? Or will they even put it to use?
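Had we gotten to the analytics stage, the metrics above could be captured with simple timestamped events. A minimal sketch, assuming Unity's custom-event analytics API (the event and field names here are hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics;

// Hypothetical sketch of the metrics we wanted: time per question,
// which answer was chosen, and whether it was correct.
public class QuestionMetrics : MonoBehaviour
{
    private float questionShownAt;

    public void OnQuestionShown()
    {
        questionShownAt = Time.time;
    }

    public void OnAnswerChosen(string questionId, int answerIndex, bool correct)
    {
        Analytics.CustomEvent("questionAnswered", new Dictionary<string, object>
        {
            { "questionId", questionId },
            { "answerIndex", answerIndex },                 // answer frequency
            { "correct", correct },                         // right/wrong percentage
            { "secondsSpent", Time.time - questionShownAt } // time per question
        });
    }
}
```

A matching event on the feedback screens (shown/dismissed timestamps) would cover the "did they actually read it?" question the same way.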

Until next time –




Posted on May 2, 2017 in Uncategorized