
Monthly Archives: March 2017

New Intelligence: Project Methodology

Critical Chain / Agile hybrid

Critical chain is sometimes also referred to as critical path. We combined the critical chain methodology with the agile methodology.

What is Agile?

Agile management, or agile process management, or simply agile, refers to an iterative, incremental method of managing the design and build activities of engineering, information technology and other business areas that aims to provide new product or service development in a highly flexible and interactive manner; an example is its application in Scrum, an original form of agile software development.[1] It requires capable individuals from the relevant business, openness to consistent customer input, and management openness to non-hierarchical forms of leadership.[1] The Agile Manifesto is centered on four values:

  1. Communication with parties is more important than standard procedures and tools.
  2. Focus on delivering a working application and less focus on providing thorough documentation.
  3. Collaborate more with clients.
  4. Be open to changes instead of freezing the scope of the work.[2]


What is Critical Chain?

As opposed to waterfall and agile project management, which focus more on schedules and tasks, the critical chain project management methodology is geared more towards solving resource problems. Each project has a certain set of core elements, called a critical chain (sometimes referred to as the critical path), that establishes a project’s minimum timeline. The critical chain methodology devotes adequate resources to this critical chain while devoting enough resources to other tasks so that they can run concurrently, but still keeps enough of a buffer to reassign resources when needed. This setup is ideal for resource-heavy teams, or for those who have enough flexibility in their team members’ respective skill sets.

Going in to create a serious game for NI, we had the mindset of using an agile project methodology, purely because this was the first time that the team and I had “worked” for someone else. Although NI were our ‘client’ and seeking our expertise, they are the ones who know the content inside and out and what the app’s intention is. Ultimately we are following their lead, and they ours, leaning on each other’s expertise. It was never going to be a straightforward project, for many reasons that I’ll get to in the post-mortem. We knew there was a start and end date, but the in-between was bound to change. We were never going to know the exact timeline, so it needed to be flexible. We’d also have to account for changes around collaboration and meetings between New Intelligence and the team. We ended up adapting and morphing an agile methodology with the critical chain methodology. We needed critical chain because we did have core elements that needed attention, but the timing of them changes. We also needed it because agile puts more of a focus on delivering a working application than on documentation, but this project still relied VERY heavily on documentation:

  • A – So we knew what we were doing.
  • B – So we didn’t forget what we were doing.
  • C – We needed to figure out the systems and what exactly is going into this app.
  • D – We needed others to understand what we were doing.
  • E – What if we were going to continue working on this after the delivery? What if someone else is?

The critical chain methodology helps us identify the most urgent task and work towards it. It also helps us identify deadlines that we need to work towards and set our focus on them. We know that there are milestones and that those milestones might change: the milestones’ content might change, or the time of achieving a milestone might be pushed forward or back. Critical chain also helps us adequately assign our valuable resources to work towards specific outcomes whilst still assigning resources to other tasks that can progress side by side without depending on each other to move forward.

The projected timeline

Initial Project Timeline


To wrap up my points above, this has been the first project where I’ve actively stepped down from the project management role. However, I’ve happily shared all the tips and tricks that I’ve learnt along the way to help enrich the knowledge of our project manager in this instance. Ultimately it’s his say on how the project will be run and what our approach to tasks and deadlines will be. My personal mentality is: he says we’re doing it this way, and I say “okay”.

Until next time –

Nic


Bibliography


5 Effective Project Management Methodologies and When to Use Them. (2017). Explore.easyprojects.net. Retrieved from https://explore.easyprojects.net/blog/project-management-methodologies

Agile management. (2017). En.wikipedia.org. Retrieved from https://en.wikipedia.org/wiki/Agile_management

 

Posted on March 31, 2017 in Game Dev

 

New Intelligence: Tools & Paper Prototyping


The gang of designers

Over the last few weeks the design team and I have been exploring different areas and tools to help us in the process of creating the serious game for New Intelligence. After we’d done the training course and analyzed a few different types of games and how they present information, we came together and found the commonalities between what NI thought needed the most attention and what we thought needed attention. One of the tools that has been extremely handy to have is a giant whiteboard (the size of a wall) to quickly dump a large amount and variety of information.

After we identified the commonalities we all ended up having a half-hour rapid brainstorm session. The aim was to jot down as many different ideas as we could: the summation of all of the content we had learnt and what needed to be in the app, but in in-game form. This was the time to start transferring all of this knowledge into activities that can be played/used by players in the app. We wrote down our ideas on post-it notes. Post-it notes worked really well for a few reasons.

  • They work well in a collaborative work space. You can all end up putting them in a specific area within a room like a pin board for everyone to see while brainstorming and afterwards.
  • They limit the amount of information you can put on one, so it ends up being a more condensed version and somewhat easily decipherable by others.
  • Because they limit the amount of information it means there’s less time spent sitting on a single idea, it almost forces you to push onto another.
  • It makes you constantly move to put the post-it note onto the ‘pin board’, stimulating body and mind. I find that this gets all kinds of creative juices flowing and improves workflow.

The result:


Then after that, we each wrote on the whiteboard which ideas we liked best out of all of them, as well as another consolidation of the most frequent ideas that came up.


After we had commonalities, we assigned particular tasks and split up to start drafting how particular exercises would work within the app, to test whether any of these ideas could possibly work and what they felt like to operate. And much like in other stages of game development, we paper prototyped!


There were large sheets of paper to test on and post-it notes, along with a whole box of goodies that can be used for paper prototyping. In Adam’s case, he was using some of the large sheets of paper with lots of goodies from the box, having physical objects that can be moved around to simulate what the app’s screen might look like. The rest of us stuck to post-it notes, because these post-it notes are almost the same size as 2016–2017 modern-day mobile screens. If we could fit our prototype content onto post-it notes of this size, we know it could translate onto phones in a similar fashion. And if it fits at this size, it’s only going to be easier to interpret and decipher on larger screens such as tablets.


Some of my paper phone prototypes


Eventually it came to a point in time where we should start seeing where these elements fit into the app and how the app flowed. This was/is another ongoing conversation because this is iterative design. It was back to the drawing board – literally. We started to figure out how and where each of the game elements worked, using the big whiteboard ‘wall’ as a plotting space to compile all of the common aspects and activities we wanted to be in the app.


Once there was enough knowledge of how the app could flow and where some of the prototype content could fit, it was time to move onto testing how this would actually look/feel/operate on a digital device. Luckily, being in the 21st century with technology at our fingertips, instead of blasting straight into Unity (which is what we’re pretty much used to doing) we took another approach. The internet has plenty of places to do “proper” or “mockup” UI designs. One of the greatest things about using one of these sites is that if we had made this quick prototype in Unity, it might have felt like the beginning of the real design. Sure, it could be iterated on, but we might have been afraid to discard it. Being on an external site means we have to discard it, and rather than sticking with the first thing we try, explore different ways of doing things. We tested a few sites that enabled us to do what we wanted for free.

Without going into great detail about what’s so good and bad about each of them, I’m going to do a quick summary. They all do what they’re supposed to. Some have minor differences in aesthetics and in what they enable you to do or what is available to use. Some were Mac only. The one that stood out was Marvelapp, for a number of reasons:

  1. Multiple people can be invited to the project and work simultaneously.
  2. There’s an app that you can download and view your project on.
  3. Within the app you can play through your design.
  4. The app is free also.
  5. You can send links to people to play through your design!
  6. You can record exactly what users do!
  7. It updates live (when there aren’t any syncing problems).
  8. It updates live on mobile devices (when there aren’t syncing problems).
  9. It can be used on both PC and Mac.

Marvel has a long list of cool features, but it lacks a few that make the difference, like simple copy and paste or drag and drop functionality in play mode. Missing parts of that simple functionality make it a little more time-consuming to do VERY simple screen changes that get the idea across, but the app itself makes accessibility a breeze. And when it came to sending the link to NI and getting them to run through the app in its most basic form, the message got across and they understood it, which confirms it was the right choice.


Until next time –

Nic

 

Posted on March 23, 2017 in Uncategorized

 

The Choices Within: My Friend Game Design Ethics

In recent weeks I’ve had discussions with some of my colleagues about ethics in game design. We each jumped onto PhilosophyExperiments.com and ran through a couple of scenarios. These specific instances aren’t tied to game development at all; they present particular scenarios and options to resolve them. At first I was expecting them to give me a massive novel of ‘how I did’ and the implications behind the choices I made. But they didn’t. Throughout, they gave me feedback on the decisions I made and told me whether some of my choices conflicted with earlier ones – whether I was consistent in my choices. Essentially, it tests what you believe is right or wrong, or just what you believe in.

Dean Takahashi said:

Each person’s definition of what is ethical changes.

Everyone grows up in different environments, with different people, different conditions, different life experiences, different teachers, different perspectives and different religious views, which all culminates, ultimately, in different beliefs. So to me, ethics isn’t about being right or wrong; it’s about what we believe is right or wrong. It’s where we draw the line. To me, ethics only matters when a living being is impacted – whether it’s you as the individual, someone else, or the resulting action impacts a living being. Humans or animals or both.

The IGDA (International Game Developers Association) has a Code of Ethics. If you haven’t read it yet, please do; it’s really straightforward and covers the basics of a lot of topics. I’m not part of the IGDA, but their 3 sections, more so sections 1 and 2, are immediately applicable and describe processes that I already follow and have followed, because they tie very closely to my beliefs.

So in saying all of this, in my game development journey/experience so far I have never been asked to do or create anything that has pushed me to that line – that threshold of what I think is right or wrong. Nor have I asked anyone to do something for me that pushes them to their line. I’m at the point in my career where I’ve fumbled around in the dark enough to grasp the basics of game development and the tools required to make ‘games’.

I’ve found my footing.

Now is where the games I create are truly starting to shine as a cohesive whole. Rather than ‘here’s some mechanics I tried to make‘, it’s ‘here is an actual game‘ or ‘here is an actual game that provokes a particular experience‘. Most of which don’t (not my intent) or shouldn’t conflict with anyone else’s thresholds. For example, a game called ‘TeTron‘: a hovercraft in a futuristic Tron’ish looking space where you collect Tetris blocks and deliver them to a black hole. Nothing controversial, right? But if you as a reader have ended up playing TeTron and found anything misrepresented, I’d be more than happy to talk about it. It’s a dull example, but it leads me to my next point.

Feeding The Forgotten

“With inspiration drawn from my recent travels to PAX Aus in Melbourne (2016) and other travels within and around my hometown of Brisbane, I wanted to put the player in the shoes of a person who treats everyone as if they were equal. With the world what it is today, we all have the power within us to help those who are in need, or less fortunate. And in the process hopefully inspire others to do so too. With fictional characters who have real world issues, I wanted to portray these characters for who they are, as people.”

Feeding the Forgotten is one of the only games so far that has required me to properly consider ethics in game design. This was a game where I was constantly jumping back and forth over my own threshold of what I thought was appropriate, especially because this is a product for consumers. The opinions that are presented in this game will ultimately be viewed and digested by those consumers. But not only that – so will the representation of everything within the game. The representation of elements in the game is the only bridge of communication for what I was trying to get across, and that representation needed to be a good medium for my intention but also a fair and accurate one.

In the IGDA code of ethics – Section 1 point number 7.
Strive to create content appropriate for our stated audience, and never misrepresent or hide content from committees assigned to review content for communication to the public, and specifically we will work strenuously to cooperate with and support local/regional ratings boards.

Never misrepresent content.

That’s exactly what I wanted to do (not misrepresent content). I’m representing artificial human constructions that mimic real interpretations of people who don’t have a house to call their own, as well as constructing stories based on real life issues that contribute to putting these people in the positions they’re in. To me, this is a very delicate subject, because this is the representation of some people’s lives. People actually have to go through this. So in saying that: if these people and stories were misrepresented, not only would the original intent of this game be completely out the window, it could offend anyone in the audience who has as much or more knowledge than I do on the subject matter. And as a game developer, it’s not just that I feel it is my duty to represent content as accurately as possible.

I Want To.

In order to do so I needed to research. And research was done, but the rest of this isn’t going to be about the research. One of the main things that I wanted to get across was that these people aren’t in the situations they’re in because of drugs – there are many more reasons. Unfortunately I can only get across the 7 that I have within the game, because of a little thing called ‘scope’ and deadlines. I just wanted to address some of the possible reasons as to why this occurs, and bring to attention that they are still people.

Magoo was one of the people I ended up running into regularly. Funnily enough, it was at the bridge.


New Intelligence

As I’ve mentioned in a previous blog, New Intelligence have asked us to make an app for them. Part of the process has been that they’ve provided us with the content and training required in order to understand what they do and what their content is. They ran us through the training course that they provide to their client base – the training course that they actually charge money to participate in. They provided this training course to us free of charge.

Part of the agreement is that we are free to show our work but not give away the content for free. So any further blogs that explain bits and details of parts of the app we’re making will never be content heavy. Even though this isn’t in a contract (though I’d be happy for it to be), as a developer I wish to honor this agreement. It ties directly into ethical design, in the sense that a company has devoted their time and their money to researching and creating a commercial product. Much like game development (at a later stage where I might be charging money for games or working on games that will have a commercial purpose), I’d like to respect section 3 of the IGDA Code of Ethics and avoid giving away their course content for free.

Respect intellectual property rights.

Until next time –

Nic

 

Posted on March 20, 2017 in Uncategorized

 

Freedom Through A Lens UI – Part 2: Selecting Photographs

This is part 2 of the UI that I developed for Freedom Through A Lens. This part covers the point where everyone has been spoken to, the photographs have been taken, and the photojournalist is back at their office choosing which photos to use in their piece of media. I say ‘a piece of media’ because its specifics are left open to interpretation.

End Menu 1

Everything here is UI. But like I discussed in part 1, this UI canvas is a child of another separate camera and its render mode isn’t Screen Space – Overlay; it’s using Screen Space – Camera (this camera). The intent behind this section is that once the photojournalist got back to the office, they’d have a single photo of each of the people who allowed them to take one. Because there would only be a small number of photos, they would be scattered across the desk in a non-organized fashion, but scattered in a way that no photo was completely covered by another. Once the player hovered over an image, it would move towards the camera, enlarging it. If they selected the image, it would translate to an already set spot. If they stopped hovering over the photo, it would translate back to its original position. There was a maximum of three photos that could fit on the piece of media. Once three photos had been selected, there would be a confirmation asking if these were the photos they wanted to use. If they chose to re-select the photos, all of the photos would translate back to their starting positions. The remaining images that had not been selected moved off-screen. Originally, as I set this up, there were always going to be 7 photos, 3 of which got chosen and 4 discarded. When there were more photos not being selected it was more effective to watch.

Re-selecting


All of this was done by Mathf.Lerp-ing the RectTransform of UI images. The new part that I had to learn was that UI image transforms work a little differently to regular Transforms.

Transform and Rect Transform

Transforms always move from the pivot point of the object, not the centre, which is the opposite of RectTransforms – along with the syntax for how to move objects.

Transforms are moved by transform.position.

Rect Transforms are moved by rectTransform.anchoredPosition or anchoredPosition3D.

The anchored position is wherever the anchor is set.
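For anyone following along in code, here is a tiny sketch of that syntax difference. The component and field names are just placeholders for illustration, not anything from the project.

```csharp
using UnityEngine;

// Minimal illustration of the two kinds of movement mentioned above.
// The values are arbitrary; this only shows the syntax difference.
public class MoveSyntaxExample : MonoBehaviour
{
    public Transform worldObject;   // a regular scene object
    public RectTransform uiImage;   // a UI image under a canvas

    void Start()
    {
        // Regular Transforms are moved through transform.position (world space).
        worldObject.position = new Vector3(0f, 1f, 0f);

        // Rect Transforms are moved relative to their anchors.
        uiImage.anchoredPosition = new Vector2(100f, 50f);          // X/Y only
        uiImage.anchoredPosition3D = new Vector3(100f, 50f, -20f);  // with a Z, like the photos
    }
}
```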

UI Pivot

Anchored Position 1

Scene view – from anchor point to position

The four little white arrows are the anchors. The object is moving in 3 dimensions from the anchor point (x, y, z values) to the position of the set object (x, y, z values). When the moving object’s anchor point is centred and its x, y, z are zero, it moves the centre of the object to the x, y, z values of the set object. Whereas if the moving object’s anchor point is not centred (for example, top left) and those values are zero, the zero values still lerp to the set object’s x, y, z values, but in screen space the object ends up in a completely different location.

Anchor Points Comparison to Move to Values with red 1

Centre Pivot

Anchor Points Comparison to Move to Values with red 2

Top Left Pivot

Anchor Point 1

Game View

Anchor Point 2

Scene View


Side on view to see movement on the Z axis.


So all of this is being done where every photo has a manually placed object to move to. That move-to object is a child of the same object that the photo is a child of.

Hierarchy

So this is all essentially done under the same parent object. I’ve tried to make it so that the photo could lerp to the position of an object that isn’t an immediate child of the same parent, and it doesn’t end up working well. My educated guess is that when the move-to object is a child of the canvas instead of the same parent, its x, y, z values are different. And much like having different anchored positions – because the x, y, z values lerp from their current state to the new ones – the on-screen positions end up very different. So unfortunately every photo has its own manually placed move-to objects, because they each need more than a single position to move to.


Hovering & Selecting

In order to hover over a photograph and initiate the lerp to a position, I wanted to use the Unity EventTrigger component, which can be set up entirely in the inspector; specifically PointerEnter and PointerExit.

EventTrigger

When the cursor has entered the button: move towards the desired object.
When the cursor has exited the button: move away from the desired object.
The cursor entering and exiting the button toggled the moveTo bool. The bool is part of a script that’s attached to the button. The enter and exit events access the script on that button and run one of two methods to turn the bool true or false.
The lerp operates from 0 to 1 (0% to 100%) and the timer is just the percentage: if true, up the percentage; if not, down the percentage.
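Here’s a rough sketch of how that script could look. PhotoMover, moveTarget and the method names are my reconstruction for illustration rather than the project’s actual script; the only parts taken from above are the moveTo bool, the two toggle methods and the 0-to-1 percentage lerp.

```csharp
using UnityEngine;

// Rough sketch of the photo's move script (hypothetical names).
public class PhotoMover : MonoBehaviour
{
    public RectTransform moveTarget;  // the manually placed move-to object (same parent)
    public float moveSpeed = 4f;      // how fast the 0-1 percentage climbs

    bool moveTo;                      // toggled by PointerEnter / PointerExit
    float percent;                    // 0 = at the start position, 1 = at the target

    RectTransform rect;
    Vector3 startPosition;

    void Awake()
    {
        rect = GetComponent<RectTransform>();
        startPosition = rect.anchoredPosition3D;
    }

    // The two methods hooked up to the EventTrigger entries in the inspector.
    public void MoveToTrue()  { moveTo = true; }
    public void MoveToFalse() { moveTo = false; }

    void Update()
    {
        // If true - up the percentage, if not - down the percentage.
        percent = Mathf.Clamp01(percent + (moveTo ? 1f : -1f) * moveSpeed * Time.deltaTime);

        // Mathf.Lerp each axis of the anchored position from start to target.
        Vector3 target = moveTarget.anchoredPosition3D;
        rect.anchoredPosition3D = new Vector3(
            Mathf.Lerp(startPosition.x, target.x, percent),
            Mathf.Lerp(startPosition.y, target.y, percent),
            Mathf.Lerp(startPosition.z, target.z, percent));
    }
}
```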



But then this happens

See the jitter? That’s because the mouse is entering and exiting the bounds of the photo in a very short time span. To stop it from doing that, the cursor would have to chase the moving image, and that’s not what I wanted at all. That’s also not what I wanted players to have to do. So there needed to be some kind of buddy system: a zone that the cursor could enter to trigger the movement, and that kept the bool true while the cursor was within that zone or still within the photo.


The solution was to have a button (with a non-transparent image) that can’t be interacted with, in the exact same position and size as the photo. This is the parent object. It is the master object that detects the cursor enter and exit and has the script that controls the toggle of the bool.

Flow

The fake area is the object with the event trigger and the script that has the methods to get the ‘myButton’ object’s script and turn its moveTo bool true or false.
When I tested this out, I hovered over the fake area to trigger the move and then kept the mouse over the photograph; even when parts of the photograph or the cursor were no longer within the fake area, the photo still lerped to the correct position.

I had the desired effect, but I was curious as to why and how it did what it did. I fiddled around with some things and discovered that because the photograph is still a child object of the fake area object, its image (with the ‘Raycast Target‘ bool enabled) is part of the system that checks for the mouse hover. The photograph’s image, also being a Raycast Target, still passes the hover info up to the parent and reports that the cursor is still over it. COOL HUH?!
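As a sketch, the relay on the fake area could look something like this – again hypothetical names, reusing the PhotoMover sketch from earlier:

```csharp
using UnityEngine;

// Sketch of the 'fake area' relay (hypothetical names, not the original script).
// Sits on the parent button; its EventTrigger's PointerEnter / PointerExit
// entries call these methods, which flip the bool on the photo's script.
public class FakeArea : MonoBehaviour
{
    public PhotoMover myButton;   // the photo (child object) with the move script

    public void OnFakeAreaEnter() { myButton.MoveToTrue(); }
    public void OnFakeAreaExit()  { myButton.MoveToFalse(); }
}
```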


UI Child Index Issue

There was only one last problem to solve, and thankfully it was an easy one. Unity 5.5’s UI system renders children in hierarchy order: the first child under a parent gets rendered first, and the last child gets rendered last, so it is drawn on top of its earlier siblings.

UI rendering

The solution is to just set the sibling index of the object that’s being hovered over to a specific child index of its parent. In this case, if the number of objects changed over time, this required some manual upkeep every now and then.
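A minimal sketch of that fix, assuming a small helper script on each photo (again, the names are mine, not the original code):

```csharp
using UnityEngine;

// Sketch of the sibling-index fix (hypothetical names).
// Called when a photo starts being hovered so it renders on top of its siblings.
public class PhotoSortOrder : MonoBehaviour
{
    public void BringToFront()
    {
        // The last sibling under a parent is drawn last, so it appears on top.
        transform.SetAsLastSibling();
    }

    // Or, to match what the post describes: move to a specific child index of the parent.
    public void SetIndex(int index)
    {
        transform.SetSiblingIndex(index);
    }
}
```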

The end result

There’s more to talk about with this but I’ll conclude it at that for now.

Until next time –

Nic

 

Posted on March 15, 2017 in Uncategorized

 

Freedom Through A Lens UI – Part 1: Main Menu Camera Flash Effect

Nic Staracek asked me to help him create a user interface (UI) for his #ResistJam game:

Freedom Through A Lens

You can download it here.

I’ll call Nic “Staracek” for the purpose of these specific blogs, because my name is also Nic and I might get confused saying my own/his name every now and then.

What Staracek wanted was a camera flash (as if you were taking a photo) to accompany transitioning between menu pages (among other things). I’m no artist, and neither is he, so we can’t easily (at the moment) make a super cool visual effect, image, shader etc. that allows us to get the effect he desired.


We fiddled with a few options that could possibly replicate, or do something similar to, a camera flash.

Test Camera Flash

Quick Test Option 1

Black Test Flash

Quick Test Option 2

We quickly tried (EMPHASIS ON QUICKLY) having a UI image and just scaling it up quickly. We thought that no matter how pretty the image was, it wouldn’t feel right – it has such an abrupt edge. Even an image with a softer edge still didn’t feel right. What about if there was just a solid white screen overlay whose transparency faded in and out?

Test White Canvasgroup alpha

UI Image With CanvasGroup Alpha moving between none and full

It felt better but it was missing a critical element to how camera flashes operate. It didn’t have an origin point, it was kind of just everywhere. I thought about how light actually affects objects and the possibility of having a light in the UI.


UI Light In World Space

But there was a problem. The light, even though a child of the UI Canvas, did not affect the other objects. The light was positioned in coordinates relative to the UI Canvas and it seemed like the correct place, but from what I can gather, lights in physical space and UI Canvas images interact very differently. So without spending too much time wondering how we could make that work, I jumped to the thought: light touches physical objects, so why not make the menu a physical object?

Create a 3D plane object (rotate it to face the camera) and drop the image sprite onto it so it gets converted into a material. At this point the camera had a Perspective projection; it still cared about depth. I created a separate camera, switched its projection to Orthographic and untagged it as ‘Main Camera’ to avoid any possible confusion with cameras later. The menu camera no longer cared about depth (the Z axis) and only cared about objects positioned in front of it on the X and Y axes.


Orthographic Camera

Now that the main menu is a physical object on a plane and the material on it affects how the light interacts with it, I fiddled with its material properties to adjust how it looked for the camera.

UI Image Standard FTAL

Standard UI Image

Physical Menu Object Material Difference

Physical Menu Material Properties Shifting

Another factor that also affected what the main menu object looked like was the skybox, because in this instance part of the lighting came from the skybox.

Main Menu Physical Object Skybox Lighting as Default

Main Menu Physical Object with Default Skybox as Lighting Source

Main Menu Physical Object Skybox Lighting

Main Menu Physical Object with Default Skybox as Lighting Source

Comparison



Now that the menu image is a physical object that can interact with light, it needs a light that isn’t the directional light. A point light covers an area relative to its position without a specific direction, so that’s the one I chose. I gave it a ridiculous amount of range and cranked the intensity to max in an attempt to cover the surface area of the image, whilst also positioning it so its origin starts where the camera flash would be.

Light Positioning

The light alone on full intensity and a ridiculous range didn’t cover the image.

Moving the light away from the exact position of the image made the flash radiate outwards relative to its radius, but after a certain point the light just started to fade away, as a point light does when its reach is too far from the surface. So there needed to be other layers in order for this to work. The next step was adding a ‘Sun’ flare to the point light, to really get that light crankin’.

Sun Flare

Added Sun Flare

Anything on the -Y position meant it was translating backwards in world space; if its position on the Y was positive, it meant it was ‘behind’ the object. It’s supposed to be the Z axis, because that’s forwards and backwards, but for whatever reason its empty game object parent had a rotation of 90 on the X. Because it’s a child of a rotation, technically Y is up relative to the parent transform, but in world space it was the equivalent of the Z axis, so I just ran with it. It might not have been best practice, but hey, it was an 8 day game jam – sometimes it gets messy. Anywho, as soon as it was in ‘front’ of the main menu image, the flare was really up in your face. Turns out, though, the further away from the object it was, the less apparent the flare was – it was doing the reverse of the point light alone. The intensity of the light had to be decreased too, because 8 intensity with a sun flare was melting my face; 0.87f seemed like the perfect amount of intensity in combination with the flare. The light completely dissipates very close to -100, and as it gets closer to that number, it hits a point where the flare starts to create a flash-like light right where the flash on a camera should be.

Light 2.gif

The movement that ended up being the most appropriate was to start far away and then move closer. At this stage the light and the flare alone still weren’t covering the entire image, so in combination with the flare, I cranked the intensity of the point light back up to 8.

Light 3.gif

From -100 on the Y axis to -20 on the Y is the most effective positioning for the light. Anything closer to 0 than -20 didn’t have any effect on the coverage of the image. What else makes stuff brighter?

BLOOM

Light 4.gif

Bloom From 0.1 – 2

In order to reach and cover the rest of the image, the bloom intensity had to be 2. With everything working in combination, this is what created that flash effect on a physical object in space. It didn’t matter how far the image was from the camera; what mattered was the distance of the light relative to the object. I had all the components I needed to make this camera flash, so the next step was to automate it: do all of the aforementioned in a sequence to flash up, and then flash down.

The light had to have a -Y distance of exactly 100, the intensity of the light had to be exactly 0.87f in combination with the flare, and the bloom on the camera had a maximum of 2. The solution was to:

  • Mathf.Lerp the transform position of the light (from -100 to -20) at a particular rate that made it imitate a camera flash (this took some iteration).
  • Mathf.Lerp the intensity of the light from 0.87 to the maximum of 8 IF the light’s position was -40 or closer.
  • At the same time as blasting the intensity, Mathf.Lerp the intensity of the bloom on the camera from 0.1 to 2.

Then once that worked, do the exact same in reverse.
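Here’s a rough reconstruction of that sequence as a single script. It’s a sketch built from the numbers above, not the actual jam code, and the bloom hook is left as a placeholder because the exact component and property depend on which image-effect package is in the project.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the flash-up / flash-down sequence (hypothetical names).
public class CameraFlash : MonoBehaviour
{
    public Light flashLight;              // the point light carrying the Sun flare
    public float flashDuration = 0.25f;   // the rate was tuned by iteration

    const float FarY = -100f;             // light fully dissipated
    const float NearY = -20f;             // closest useful position
    const float IntensityRampY = -40f;    // where the intensity blast kicks in
    const float BaseIntensity = 0.87f;
    const float MaxIntensity = 8f;
    const float BaseBloom = 0.1f;
    const float MaxBloom = 2f;

    public void Flash()
    {
        StartCoroutine(FlashRoutine());
    }

    IEnumerator FlashRoutine()
    {
        yield return Ramp(0f, 1f);   // flash up
        // the menu transition happens here, while the flash is at its peak
        yield return Ramp(1f, 0f);   // flash down
    }

    IEnumerator Ramp(float from, float to)
    {
        for (float t = 0f; t < 1f; t += Time.deltaTime / flashDuration)
        {
            Apply(Mathf.Lerp(from, to, t));
            yield return null;
        }
        Apply(to);
    }

    void Apply(float percent)
    {
        // Move the light along its local Y (which ends up being world Z here).
        float y = Mathf.Lerp(FarY, NearY, percent);
        Vector3 p = flashLight.transform.localPosition;
        flashLight.transform.localPosition = new Vector3(p.x, y, p.z);

        // Only blast the intensity (and bloom) once the light is -40 or closer.
        float ramp = Mathf.InverseLerp(IntensityRampY, NearY, y);
        flashLight.intensity = Mathf.Lerp(BaseIntensity, MaxIntensity, ramp);
        SetBloom(Mathf.Lerp(BaseBloom, MaxBloom, ramp));
    }

    void SetBloom(float value)
    {
        // Placeholder: wire this up to whichever bloom effect's intensity you use.
    }
}
```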


So now that the ‘Image & Flash’ for the main menu had been set up, the menu still needed a UI canvas to interact with, but this time instead of defaulting to Render mode of

Screen Space – Overlay, I only wanted this UI canvas to work with this specific camera.
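We set this up in the inspector, but for completeness, a minimal sketch of the same canvas setup from code would look roughly like this (MenuCanvasSetup is a hypothetical name):

```csharp
using UnityEngine;

// Small sketch: point the menu canvas at the orthographic menu camera.
public class MenuCanvasSetup : MonoBehaviour
{
    public Canvas menuCanvas;   // the menu's UI canvas
    public Camera menuCamera;   // the orthographic menu camera from earlier

    void Awake()
    {
        menuCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        menuCanvas.worldCamera = menuCamera;   // only render with this camera
    }
}
```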


After wiring up the menu, the only thing left to do was to slot in the transition we needed to happen between when the flash was at its maximum and right before it started to flash down.


I’m really proud of this. And this is only part one of a two part series. The next blog talks about moving Rect Transforms and automated sequences.

Until next time –

Nic

 

Posted on March 13, 2017 in Uncategorized