Monday, November 14, 2011

Reading Designing with the Mind in Mind by Jeff Johnson: Thinking about and understanding the psychological and rhetorical implications of UI Design.

Another realm of visual design that the artists explored was the psychological and rhetorical implications of user interfaces.  By understanding how the human brain and visual system operate, they would be able to cater to the natural tendencies of users and create a more intuitive and effective interface design.
Our most basic and obvious intention for the user interface design was to make it as intuitive as possible.  To learn which design principles would allow us to do so, we turned to Designing with the Mind in Mind by Jeff Johnson, a book filled with essential guidelines, both psychological and visual, for building a user-friendly interface.
The tricky part of this design process was that we were working within the confines of an already established art style.  This meant that basic guidelines in the book, like those about color, would be out of our control.  Instead we focused on design principles governing the layout, structure and textual aspects of the interface, though even those were already prominent in Picturiffic.
The book asserts that, as humans, we have a natural tendency to organize visual information and to simplify it for our own understanding.  The Gestalt principles allude to how one can use this innate behavior both to avoid bad design and to improve good design.  We found the proximity, similarity and figure-ground principles the most applicable to the work we would be doing.
The proximity principle states that the distance between objects can determine how we interpret their functions and whether or not we perceive them as groups.  This made us consider the spatial relationships and boundaries between different objects in the interface.
Something like the Daily Puzzle box that we designed for the home screen groups all of the information relevant to the Daily Puzzle together.  This prevents the user from confusing things like the timer with other game modes; we wouldn't want them to think that they couldn't play any game mode at all until the timer was reset.  We also applied the proximity principle in our settings menu design.  Since we relocated all of the player information into this menu, it was essential that we displayed it as a group.  By confining the different player currencies and the level status bar to a separate bounding box on the screen, it's made clear that they're all related chunks of information.
Elements of proximity were also already present in Picturiffic.  Things like the hearts and charms were closely grouped together, which made them seem related in function or purpose.
The similarity principle states that objects that look alike are perceived as a group.  This is important because it allows the user to easily identify the functions of different elements on the screen.
This principle was already partially established in Picturiffic.  Buttons, for example, all had similar shapes, with two rounded corners and two sharp ones, allowing the user to quickly distinguish what on the screen is a button from what isn't.  It was important for us to maintain conventions like this: they not only let new users more easily interpret objects on the screen, they also let returning players jump right into the gameplay because they recognize the features they're already familiar with.
Since some elements of the interface already embodied both of these principles in the Facebook version of Picturiffic, it was important for us to maintain consistency throughout our new designs.  Without consistency we would be wasting the user's brainpower on figuring out what does what rather than on playing the game itself.
One last principle that we adopted was the figure-ground principle.  It states that we naturally split our visual field into figures (the foreground) and the ground (the background).  By careful visual manipulation we can bring elements to the foreground temporarily, focusing the user's attention for a brief moment.
The most notable use of this principle is in our settings menu design.  When the user clicks the settings button, a semi-transparent box opens up over the game board (which temporarily becomes the background) and focuses the user on the new foreground elements that make up the settings menu.  This also helps the user keep their orientation and goals, because the menu doesn't directly replace the existing information (i.e. the gameplay) but instead temporarily pushes it behind a transparent box.
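To make the idea concrete, here is a minimal ActionScript 3 sketch of that kind of figure-ground overlay; the function and parameter names are my own invention for illustration, not Picturiffic's actual code.

```actionscript
// Hypothetical sketch: a semi-transparent scrim pushes the game board
// into the background while the settings panel becomes the new figure.
import flash.display.Sprite;
import flash.display.Stage;

function openSettingsMenu(stage:Stage, settingsPanel:Sprite):Sprite {
    var scrim:Sprite = new Sprite();
    // Translucent black rectangle drawn over the whole game board.
    scrim.graphics.beginFill(0x000000, 0.6);
    scrim.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
    scrim.graphics.endFill();

    stage.addChild(scrim);          // the game board stays visible behind it
    stage.addChild(settingsPanel);  // the panel now reads as the figure
    return scrim;                   // remove scrim + panel to restore play
}
```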
As I stated previously, this was a tricky task because we were working with an established design.  We had to maintain a certain consistency with things like color, shape, spacing and animations, but at the same time we had to implement that consistency in a new visual structure.  There was a delicate balance between the consistency and the intuitiveness of our designs.  Often we would try to make minor changes to the art treatment to make the design more intuitive, but that would detract from the original art conventions of the game.  So the challenge really stemmed from the necessity of redesigning an existing design while staying true to its original nature.
A few other notable ideas that came from this book involved our tendency to perceive structure.  By creating a visual hierarchy in an interface, the user is better able to scan and interpret the information presented to them.  The original art treatment for Picturiffic had quite a few visual elements that established a visual hierarchy.  One example is the use of the purple button: purple buttons are meant to mark the more important actions a user can take.  Another example is the exclamations, the chunks of information that fly onto the screen and often take prominence in the foreground.  The combination of their appearance and their animation places them higher in the visual hierarchy.
Another challenge was the textual aspect of the interface design.  We're not naturally wired to read, so poor design and wording of textual information can detract from the intuitiveness of an interface.  The author suggests minimizing the user's need to read, which is also something echoed by Large Animal Games.  He also suggests avoiding complex or tiny fonts, patterned backgrounds and centered blocks of text, and instead supporting reading by using plain language, saying what you need to say in a mild and simple manner, and using a font format that is easy to read.
All of these principles serve to create a clearer, more concise communication of information.  It is not the individual principles that accomplish this, though, but rather their fusion, which allows the design to communicate itself in a way that is natural to understand.

Sources:

Designing with the Mind in Mind by Jeff Johnson
Original Tech Research

When the project began, our goal was to port the Picturiffic game from Flash on Facebook to HTML5. Large Animal wanted to move Picturiffic to iOS, on devices such as the Apple iPhone and iPad. Our initial research into this area turned up a few issues.

First off, MobileSafari imposes some restrictions on sound files:
1. Audio files may not be pre-loaded or played automatically. They may only be played as the result of user input, such as a button press.
2. Only one audio file may be loaded at a time. This prevents the use of simultaneous audio, such as background music and sound effects playing concurrently.

The solution to these problems was to release the game as a native iOS application (“app”), parallel to its HTML5 release. One option for supporting this parallel release was to use a piece of middleware, such as PhoneGap. PhoneGap takes HTML and JavaScript files and wraps them into packages usable by mobile devices, such as .apk files for Android devices or native app bundles for iOS devices. Because PhoneGap packages the web version's files into native form automatically, the code only needs to be written once and can then be deployed in multiple ways. This was not an efficient way of producing clean code, but it was the easiest pipeline we discovered.

With the sound problem solved, we turned to the task of actually generating code. We experimented with the tool Google Web Toolkit (GWT), which compiles Java code into JavaScript for use with HTML documents on websites. GWT itself worked well and gave us access to HTML5 elements such as the canvas, one of the core additions to HTML5.

The next step was to create a specific pipeline to integrate art assets, such as animation, with the code generated by GWT. The team looked into two HTML5 animation tools to complete this task: Adobe Edge and Sencha Animator.

Adobe Edge stood out as a very promising tool. It provides artists with tools similar to those of Adobe Flash, such as the timeline and stage, and outputs animations as HTML5 webpages. However, Edge is still in a preview stage (meaning Adobe released it mainly to generate buzz about its capabilities), and it is not complete enough for its animations to be integrated into other projects. Because it can only output JavaScript and HTML, Edge doesn't interface well with GWT and complicates the pipeline. Additionally, the animations have to be static; there isn't yet a way for them to use dynamic elements from the page. Because of this, the team had to abandon Edge despite its potential.

Sencha Animator is another HTML5 animation tool in development. However, Sencha creates animations through CSS3 and appears much better suited for content like advertisements, which play continuously and don't depend on user input. Because of this, the team decided that Sencha would not be a robust enough tool to create the animations needed in the game and took another route.

The final HTML5 approach the team considered was procedural animation: the artists would plan out the timing and positioning of animations, and the programmers would hard-code them into the game. In the end, it was decided that this route would devote too much of the team's time to working out animation and not enough to making the game fun, playable and polished.
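For illustration, here is a minimal sketch of what procedural animation means in practice. It's written in ActionScript 3, the platform the team ultimately settled on (see below), and the timing values are made up.

```actionscript
// Hand-coded animation: the timing and positioning live in code,
// not in an artist's timeline.
import flash.display.Sprite;
import flash.events.Event;

var tile:Sprite = new Sprite();
tile.graphics.beginFill(0x3366CC);
tile.graphics.drawRect(0, 0, 40, 40);
tile.graphics.endFill();
addChild(tile);

var targetX:Number = 200; // hard-coded destination
tile.addEventListener(Event.ENTER_FRAME, slideTile);

function slideTile(e:Event):void {
    // Ease toward the target by a fixed fraction each frame.
    tile.x += (targetX - tile.x) * 0.2;
    if (Math.abs(targetX - tile.x) < 0.5) {
        tile.x = targetX;
        tile.removeEventListener(Event.ENTER_FRAME, slideTile);
    }
}
```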

With no clear pipeline to unite the code and art assets, the team decided to pursue Flash development instead of HTML5. While MobileSafari doesn't support the Flash Player plug-in, it is possible to use Flash to create native apps. In September 2010, Apple lifted its restriction on third-party app-creation software, and as a result Adobe added a packager to its AIR platform that allows programs written in ActionScript to be released on iOS devices.

Flash apps on iOS work differently from other uses of Flash. Normally, Flash executes ActionScript commands at run-time; this is how Flash works in traditional web browsers and in apps published through AIR for Android devices. Apple, however, forbids the just-in-time compilation ActionScript usually relies on. Instead, when code is published through AIR for iOS, it is compiled ahead of time into native code, making it compliant with Apple's app requirements.

Flash Optimization Research

Because the working build's framerate has been less than ideal of late, we dedicated some time to researching how to optimize the performance of Flash projects, specifically on mobile platforms. One specific idea was to divide the FLA into scenes, theoretically splitting the load times between them and lowering the game's maximum memory usage.

In Flash CS5: The Missing Manual, Fourth Edition, the authors explain that while scenes are excellent for organization, they provide little to no benefit for the audience of the Flash project. This is because Flash doesn't treat scenes the way I had anticipated; when it publishes the game into a SWF file, it stores everything in one big timeline. This means using scenes doesn't actually provide the performance boost I had hoped for, although it does make the file more organized and easier to work with.

In fact, the only method of optimization explained by the book is the use of multiple SWF files. The idea is that if you divide the project into multiple SWFs and have one main file load the others, you reduce the amount the user has to have loaded into memory at any given time. As a side note, this is how I (mistakenly) hoped scenes would work.
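As a sketch of what that looks like in ActionScript 3 (the file name here is made up for illustration):

```actionscript
// The main SWF stays small and pulls heavier sections in on demand,
// so less has to sit in memory at any given time.
import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

var sectionLoader:Loader = new Loader();
sectionLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onSectionLoaded);
sectionLoader.load(new URLRequest("dailyPuzzle.swf")); // hypothetical file name

function onSectionLoaded(e:Event):void {
    addChild(sectionLoader);
    // When the player leaves this section, sectionLoader.unloadAndStop()
    // frees the memory it was using.
}
```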

So I set off to find more options for optimization. In my travels, I discovered a forum post that practically oozed information. What I learned is that the right method depends heavily on whether your performance bottleneck is the CPU or memory, and specifically on how animated sprites are loaded and drawn on the game's stage. Several methods were explained in the first post alone.

The first option was pixel blitting. Essentially, what was previously a bunch of MovieClip objects is packed into one large bitmap. The code specifies which part of that bitmap each object needs and copies just that region. This drastically reduces the amount of RAM needed to load all the objects in your game, but unfortunately the post mentions that it suffers large performance drops on high-resolution mobile devices like the iPad, probably due to high CPU load.
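Here's a rough sketch of blitting in ActionScript 3; SpriteSheetData stands in for an embedded sprite-sheet asset, and the frame sizes and count are invented.

```actionscript
// All animation frames live in one big BitmapData (a sprite sheet);
// each frame is copied onto a single on-screen bitmap with copyPixels().
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.Event;
import flash.geom.Point;
import flash.geom.Rectangle;

var sheet:BitmapData = new SpriteSheetData(0, 0); // hypothetical library asset
var screen:BitmapData = new BitmapData(64, 64, true, 0);
addChild(new Bitmap(screen));

var frame:int = 0;
addEventListener(Event.ENTER_FRAME, blitFrame);

function blitFrame(e:Event):void {
    // Copy the 64x64 region for the current frame; no display-list overhead.
    var src:Rectangle = new Rectangle(frame * 64, 0, 64, 64);
    screen.copyPixels(sheet, src, new Point(0, 0));
    frame = (frame + 1) % 8; // assuming 8 frames laid out in a row
}
```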

The second option was using separate MovieClip objects (containing bitmap data, not vector images) and moving them individually around the stage. While this method seems to work fine in situations like the iPad example above, it requires substantially more RAM than pixel blitting. This option is closest to what Picturiffic currently does, except Picturiffic uses a fair amount of vector data as well as bitmaps.

It's important to note that these two options can be combined by using multiple MovieClip objects and having each object contain a single bitmap that uses blitting for its animation.

Flash has two methods of manipulating bitmap data: the draw method and the copyPixels method. In general, draw is slower than copyPixels. The draw method converts vector data into bitmap data, which reduces RAM usage but increases CPU load (and actually runs faster in CPU render mode), while the copyPixels method simply copies existing image data that is already stored as BitmapData.
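A small sketch of the difference; HeartClip is a made-up library symbol used for illustration.

```actionscript
import flash.display.BitmapData;
import flash.display.MovieClip;
import flash.geom.Point;
import flash.geom.Rectangle;

// draw(): rasterize a vector MovieClip into bitmap data. CPU-heavy,
// so do it once up front rather than every frame.
var clip:MovieClip = new HeartClip(); // hypothetical library symbol
var rasterized:BitmapData = new BitmapData(Math.ceil(clip.width),
                                           Math.ceil(clip.height), true, 0);
rasterized.draw(clip);

// copyPixels(): a cheap per-frame copy of data that is already a bitmap.
var screen:BitmapData = new BitmapData(320, 480, true, 0);
screen.copyPixels(rasterized,
                  new Rectangle(0, 0, rasterized.width, rasterized.height),
                  new Point(10, 10));
```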

The author of the post also mentions the distinction between CPU and GPU render modes for mobile devices in the publish settings. Choosing the right mode for the optimization methods you use can make a huge difference. He also mentions a serious disadvantage of the Android GPU render mode: it doesn't support filter rendering for objects on the stage. This means all the glow and drop-shadow effects used in Picturiffic aren't supported, and any objects on or near a filtered object will also suffer decreased performance. Apparently it is still possible to achieve filter effects by using the draw method on the filtered objects off-stage, drawing them as bitmaps, and then placing the results on the stage.
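A sketch of that work-around, with CharmClip as a made-up library symbol. One assumption worth flagging: draw() ignores filters applied to the object itself, so a common trick is to wrap the filtered object in a holder sprite, whose children's filters do get rendered.

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;
import flash.filters.GlowFilter;

var symbol:Sprite = new CharmClip(); // hypothetical library symbol
symbol.filters = [new GlowFilter(0xFFCC00, 1, 8, 8)];

// Wrap in a holder so the filter is captured, and pad the bitmap
// so the glow has room to spread past the symbol's bounds.
var pad:int = 16;
var holder:Sprite = new Sprite();
symbol.x = pad;
symbol.y = pad;
holder.addChild(symbol);

var baked:BitmapData = new BitmapData(Math.ceil(symbol.width) + pad * 2,
                                      Math.ceil(symbol.height) + pad * 2,
                                      true, 0);
baked.draw(holder);
addChild(new Bitmap(baked)); // only the pre-rendered bitmap reaches the GPU
```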

The author also provides helpful links at the bottom of the page.

Sources:

Flash CS5: The Missing Manual, Fourth Edition

http://www.flashgameblog.at/blog-post/2010/04/08/blitting-the-art-of-fast-pixel-drawing/
http://www.kirupa.com/forum/showthread.php?359867-How-to-Optimize-Flash-Sprite-Animations-for-Mobile
http://www.unitedmindset.com/jonbcampos/2010/09/08/optimization-techniques-for-air-for-android-apps/
http://blog.newmovieclip.com/2010/11/13/adobe-air-mobile-application-performance-optimization-on-android/


Saturday, November 5, 2011

Progress!

After a stressful beginning to B-Term, the team has finally managed to get a working desktop AIR build running at a less-than-ideal but manageable framerate.  This opens new doors to start the process of getting the build running on a mobile device (most likely Android first).  It also allows the artists to start making the necessary adjustments to assets in the FLA to get everything looking as it should on the stage, including centering animations, timing transitions properly, and resizing and arranging elements in their proper places.

This is great and exciting news and things are looking up for the team to deliver a reasonable end-product at the end of this term.

Our major obstacle is going to be optimization so that we can get the game running flawlessly, which is going to be a collaborative task for both the art and tech teams.

Thursday, November 3, 2011

Initial Comps

The next step from the sketches that Mike and I did was to translate them into simulated screenshots using existing game assets.  This would give us a very clear sense of how elements would look and be arranged on the screen.

We regularly exchanged these comps with Picturiffic's artist, Shiho.  She would provide us with critical feedback, which eventually narrowed our designs down to the screens that we would actually use in the game.

One struggle Mike and I had was adhering to the company's art treatment in the initial stages of comping.  This was an exciting taste of real-world industry experience, where we had to follow certain guidelines to maintain the intended style and feel of the game.

Below is a rough timeline of the changes that the comps went through.  They are primarily focused on the home and Daily Puzzle screens, the two most prominent screens that would be featured in our mobile application.
This was a critical point in the comping stage, when it was brought to our attention that we were straying from the artistic style of Picturiffic.  I think it's evident that we made the drastic changes in our approach to the next designs that were necessary to adhere to the Picturiffic style.
These are the initial comps that I shared with Shiho for the actual gameplay screen, as well as an idea for displaying the leaderboard in a limited space.


This settings menu implements an idea I had to strip all of the player information off of the puzzle screen and dock it in the settings menu.  My reasoning was that this information wasn't critical to gameplay and would be more appropriate to show after the puzzle had been completed or failed, to display gains in energy, diamonds and level.
In the comp below I tried to implement the leaderboard in a way that would conserve more screen real estate.  In the original version of Picturiffic the leaderboard chunks had tabs that would pop up when other users made their guess; those tabs would also indicate whether their guess was right or wrong after the player made their choice.  I kept this notification system, but instead of tabs I implemented a glowing effect around each leaderboard chunk, which saves vertical space.  The different colors indicate whether the player has made a guess (blue) and whether that guess was right (green) or wrong (red).  I also rearranged the elements on the leaderboard chunks so that they would still read well in their limited space.
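To spell that scheme out, here is a hypothetical ActionScript 3 sketch of the state-to-glow mapping; the function, state names and exact colors are mine, chosen to match the description above.

```actionscript
import flash.display.Sprite;
import flash.filters.GlowFilter;

// Map a player's guess state to a glow around their leaderboard chunk,
// replacing the vertical space the old pop-up tabs occupied.
function setGuessState(chunk:Sprite, state:String):void {
    var color:uint;
    switch (state) {
        case "guessed": color = 0x3399FF; break; // blue: a guess was made
        case "correct": color = 0x33CC33; break; // green: guess was right
        case "wrong":   color = 0xCC3333; break; // red: guess was wrong
        default:
            chunk.filters = [];                  // no guess yet, no glow
            return;
    }
    chunk.filters = [new GlowFilter(color, 0.9, 10, 10)];
}
```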
The comp below is a redesign of the previous puzzle screens from above that implements the feedback I received from Shiho.  It aims to clarify the functionality of the leaderboard as well as rearrange elements like the hearts to make them seem more like elements of gameplay rather than decoration.

Finally, after a few rounds of revisions and suggestions from Brad and Shiho at Large Animal Games, we settled on this home screen design for the mobile build.  It makes the Daily Puzzle the most prominent mode of play on the screen, while the regular gameplay and create-a-puzzle modes still look appealing and inviting without overwhelming the user's visual field.  We also clarified the timer and tied it to the Daily Puzzle and only the Daily Puzzle.  Most importantly, it achieves the look and feel of the original Picturiffic game.



Tuesday, November 1, 2011

Early Research into Animation Tools

When the project began, we were initially planning on working with HTML5. Since it was relatively new, we were unfamiliar with the tools required to create working animations for an HTML5 application. We found several tools that claimed to handle HTML5 animation, but most of them had little to no usable interface, and we had been hoping for some level of user-friendliness.

Two of the tools stood out from the rest: Adobe Edge and Sencha Animator. Both seemed to have a user-friendly interface, both were established brand names within the animation community, and both seemed like they could do what we needed them to.

Adobe Edge was the first tool we looked at. Since it was designed to be very similar to the familiar tool Adobe Flash, it seemed like a logical choice. Upon further inspection, we found that it was limited in the effects and animation capabilities available within the program, which could hinder the overall quality of our final product. In addition, Adobe Edge was still in an experimental stage, meaning it could be drastically altered at any time without notice. This could've led to many headaches and possibly weeks of wasted work, so we scrapped the idea of using it as our tool of choice.

Sencha Animator was our second choice. Its interface was slightly less intuitive than Adobe Edge's, but we found that it had far more effects and animation capabilities than Adobe's product. We ran into problems when we attempted to export the animations, which could only be done via CSS3 style sheets, and we couldn't find a way to manipulate them in response to user input, as a game requires.

The final option was procedural animation: creating a series of static images that the code would assemble into an animation. This method was unfavorable for both the art and tech teams, since a lot of both teams' time would be used up optimizing or tweaking the animations after they were made. Because we didn't know how difficult it would be to optimize the animations' performance, this method was a big question mark in terms of the amount of work required to make the game.

Early UI Comps

These are some of the first compositions of the screens that I created, mostly using existing assets. The choice to use actual game assets rather than gray-box shapes was to give a better sense of what the final design would look like. While our previous compositions had focused on getting spatial reference for the screen size, these explored other equally important aspects of the UI design.


Home Screen 1

This is a menu style for the home screen that experimented with the Facebook version's more exciting visual features. It uses diagonal lines to evoke a more dynamic and energetic feeling. The color contrast between the three main sections was not well implemented and needed rethinking. Overall it felt too haphazard and sloppy to be visually appealing, so major changes needed to be made.

Home Screen 2

This composition was a revamp of the previous layout. The general idea was kept, but the coloration and size differences for the three sections were tweaked to attract the player to the Daily Puzzle and Game Show sections, rather than the Create-A-Puzzle. Even with these changes, it didn't work as well as it could have. We decided this comp would not be used in the final product, though aspects of it might make their way into future builds of the game.


Daily Puzzle Screen 1


This was the first of several compositions primarily focused on arranging the leaderboards in a way that would save screen space while still creating a sense of community for the players. This comp did a fine job of saving space by removing the player portraits and collapsing the leaderboard information into small tabs, but the community aspect was greatly diminished because you couldn't see the other players' faces. We didn't want to reduce the other players to numbers and letters, so this level of leaderboard reduction was scrapped.


Daily Puzzle Screen 2



These two compositions brought back the player portraits for the leaderboards, and also toyed with the placement of the options button and menu. By removing the diamond, energy and experience counters from the top navigation bar, we had more room to show the puzzle and phrase necessary for gameplay. At the same time, the options button was shifted to the middle to see if the symmetry would be more appealing. While the button itself might've worked in the center, once it expanded into the other buttons and displays it was too unwieldy to be a viable option. In addition, it was decided that the player portraits were too small and needed to be enlarged for the final product.


Daily Puzzle 3

In this comp, the options button was returned to its original location and the player portraits were enlarged. In order to save space on the leaderboards, player names were removed. The idea was that players would be able to recognize their friends without the names, and the picture was enough to convey the sense that you were playing along with other human players. The final change from the last comp was the rethinking of the Revive hearts, Move Arrow, and Reveal Letter buttons. The old way of having a circular button next to the explanatory text was far too space-hungry to fit comfortably amongst the other screen elements. They retained their circular shape while being absorbed into a larger rectangular button along with the text. This saved space and allowed the buttons to be a more comfortable size for pressing on a mobile device. The main problem with this design was the abundance of wasted space around the buttons and the puzzle picture, which was counteracting our efforts to save space with the rest of the design.