Tuesday, February 22, 2011

what is a game?

During class we discussed what a game was and came up with these guidelines:
-it must have a goal
-it must have rules that force players to take a route toward the goal that is not the most efficient one
-the game must also have an air of triviality; if at any time it loses this, it stops being a game
-finally, the game must be fun for everyone involved

I feel that this definition of a game is close; however, these qualities create a few paradoxes. For one, because of the rule of triviality, the same activity done by two different people can be both a game and not a game. For instance, if I were to choose to play baseball, it would be a game; to professionals, however, the game becomes serious. If they do not play well, they get fired, and therefore it is no longer a game. So what happens when the activity is a game to one person and not to another? Or what happens when, over the course of the game, a person loses that sense of triviality and then regains it? Is the activity a game or is it not? Is a game a state of mind rather than an activity? What, then, is a person who is playing a game too seriously doing?
The second paradox is the paradox of efficiency. It is true that a game's rules prohibit the most efficient method of achieving the goal of the game; however, people who are playing the game are often trying to find the most efficient way to use the rules in order to achieve that goal. If the game is chess and you have a possible checkmate move, every expert will advise that you make said move, because then you win the game. This is clearly the most efficient method that the rules allow. Work environments do this too. For example, the goal of every business is to make money. The most efficient method of doing this is to just print money, followed closely by stealing it. However, unless a person works at the mint, they do not do this. They take an inefficient route to making money. Thus, unless we want to include society itself in the definition of a game, seeing as some people take their lives trivially, we need a better definition that separates games from work.
So what should the definition of a game be? I propose that a game is any activity meant to add structure to play. This way we can differentiate games from work, because the difference between work and play is that work is not taken with an attitude of triviality. It also differentiates play from games, as a game is play with additional structure. Finally, this definition works around both paradoxes presented: efficiency is not mentioned, and the game is merely meant to evoke triviality; definition-wise, the game does not care whether it succeeds at doing so.

Tuesday, February 8, 2011

When Play Becomes Serious

One would assume that since play is a relaxing state, and since we do it voluntarily, playful competition comes in only one flavor. However, living in a world of play, I have come to realize that “casual” is not the only way to play. So what happens when competition becomes serious business? How do we know when it has? What changes between casual competition and serious competition? And is there any blurring of the line?
In order to try and answer these questions, I looked for the fastest way to make a game serious: making the outcome of the competition affect the world beyond the bubble of play. To do this I bet with pool players in the college lounge, playing for dollar games and for a meal pass (if I won I'd get a guest pass; if they won they'd get $6). My first discovery was that I was a much worse pool player when something was on the line. I played a casual game with my opponent before we played a real one and found that we were of about even skill; however, when it came to betting, I ended three balls behind, a result that cannot be chalked up to luck. I could feel myself overanalyzing my shots and thus making worse ones than normal. Since we had played pool before, the competition did not need an extended clarification of rules: there is already a set of working rules for games at the lounge that both of us knew and had agreed upon in prior games.
The second time I attempted playing for money, we played for a dollar a game. Since the stake was much lower, I felt calmer, but still pretty nervous. The student had just come back from Dublin and was unfamiliar with the pool house rules, so this time there was a huge clarification process before the game started, to make sure that everyone was playing on the same page. This would have been slightly different in a casual game, with the rules being explained as they came up instead of all at the beginning. Trying the rules-as-you-go approach in the betting game would have led to arguments, because one competitor would not be able to trust that the other was unbiased. I won the second game of pool, but I still feel I did worse than in a casual game, as the game was very close.
After doing these experiments I was reminded of the games of Magic: The Gathering that I play often, and found some similarities along the seriousness scale. Most casual games come with some house rules; one in particular is a “friendly mulligan,” or one free redrawing of your hand, meant to ensure more quality games and fewer one-sided matches. This banks on the fact that someone in a casual game will not try to mulligan in order to get the one card they need to make the game one-sided, but will work with the hand they are given if it's not terrible. However, when I was testing a deck for a tournament with a friend, we switched play styles immediately. We stopped handing out the friendly mulligans and started to use the normal mulligan rules, and we used them as a resource so we could get that one card we needed to win the game. Coincidentally, once we were done testing the decks we even switched them out, because neither one was fun to play casually. Both would win in a tournament, but we agreed that they were not very fun to play due to their serious nature. The weird thing here is that I consistently made better plays when the competition was serious than when it was casual, because Magic rewards extended thinking. In fact, I asked a nearby “impartial judge” to evaluate whether or not some plays would work before I made them. This is in contrast to casual play, where I would be very likely to just assume the plays work and wait to be corrected.
Thus, when playing a serious game, players will usually play with stricter rules, think more about their plays, and discuss the rules beforehand rather than as they come up. While I personally prefer casual play to serious play, I realize that some people find excitement in having a game dictate an aspect of their lives, and become very good at playing under pressure. In fact, many of my friends who attend these tournaments end up making money on their hobby instead of dumping money into it. As for pool, there are people who play professionally and people who bet on it often; however, I know that I am not cut out to do either.

Monday, January 31, 2011

Interaction and interface of games

Games were made to be immersive. Through storyline, graphics, compelling characters, and more, they take you to a world that is not your own. However, two things that are often overlooked in gaming immersion are interaction and interface. The interaction between human and controller is what made the Wii the platform it is today, and a good interface can make a game's setting come to life. So where have we come from on this front? And where are we going? How have interaction and interface made games what they are today? And what could they do better?
So let's start with the history of the HUD, or heads-up display, a feature in most games nowadays that keeps information out of the way of the action. The HUD is what tells you how much gas your race car has, and how many more attacks it'll take to beat your opponent into oblivion. The HUD started with Pong, where it noted the score, and has lived humbly in the background since. The HUD is the game's way of talking back to us, and nowadays it's being replaced more and more with in-game elements. This is partly because some TVs suffer burn-in when a persistent image is left on screen. So how do the HUDs of today function? They integrate the information into the scene on screen. Where you used to have an ammunition counter in a first-person shooter, today the ammo is listed on the gun itself. Instead of a hit point counter decreasing when you are hit, the screen will flash red.
Today, when it is necessary to have a HUD, it is made to fit the flavor of the world. For instance, compare the HUDs in Oblivion and Fallout 3, both third-person/first-person role-playing games. The HUDs look completely different while conveying the same information. In Oblivion, hit points are measured with a bright red bar, suggesting blood, and the compass is the largest element. In Fallout, hit points are measured in ticks, which leads the player to assume any hits they take are either dodged, dealt as damage to clothing, or finally dealt as wounds. This is because the Fallout world is more realistic: actual wounds would be much more deadly there than in Oblivion. The Fallout HUD is much more streamlined, sectioning off each piece of information, while the Oblivion HUD is more free-form. Both HUDs display only essential information, next to no numbers (besides ammunition in Fallout), and are tucked neatly out of the way where you wouldn't normally look. More detailed information can be called up with a button.
So where is the future of HUDs? Will they speak to us and tell us the info we need? Or will they become smaller and smaller? I think the trend will go toward fewer visual cues, though sound will be used less than the article implies, because talking HUDs would get annoying quickly. Instead we will have HUDs that provide feedback in other ways. For instance, with the new motion controls it would be easy to program a game that makes you less accurate when you sustain wounds. This, coupled with the screen tinting red, would make taking damage feel realistic. As the article implies, there will be more HUD-less games in the future, and more games will have integrated elements. The HUD will fade into the background, even more behind the scenes than it ever was, but enhancing games all the more.
Where interfaces will become more invisible, interaction will come to the forefront of games. Game interaction design started with the joystick of the Atari and of arcade games, which in addition would have a button or two. These were simple, but did not feel special or very comfortable. The first leap toward making interaction part of immersion came with flight simulator games. These games required a joystick, which made the experience: it was like flying a real plane, mostly because of the joystick controls. If the mouse had been used, or even worse the W, A, S, and D keys, the games wouldn't have had half the impact they actually did. The next important leap was controllers crafted to be comfortable to hold. The boxes and ovals of the NES and Super Nintendo were traded out for the PlayStation, Xbox, and GameCube controllers. These were vastly superior to their predecessors because you could play for longer periods of time and the controls felt natural. This was augmented by the fact that the controllers were easy to use, with the buttons mapped out according to which fingers would press them and how the controller would be held. Finally came the Wii, the Xbox Kinect, and the PlayStation motion controllers. These interactive devices make games even more immersive because they can feed back through the controller. The Wii, being the oldest of the three, is currently the best because it has the most practice using its controls in games. Want to play golf? Swing the controller like a club. Want to go fishing? The controller can do that too. Every conceivable interaction can be programmed to feel right using the Wii controls. The Kinect goes one step further, removing the controller entirely. I think this will be one of the many possible futures for gaming interaction. The downside I see is that there is no way for the game to provide tactile feedback, something gaming has had since the rumble feature.
How will we feel getting hit without a shaking controller? Or the jolt of turbulence as we try to land a plane? I feel that removing the controller removes one way for game designers to speak to their players, takes a whole sense out of the equation of gaming, and as such is a mistake. I do, however, see great promise in the Emotiv. The Emotiv is a device that lets you control a computer using your thoughts, by mapping which brain patterns correspond to which commands. So if you think “move back,” the computer will record that pattern, and the next time you produce similar brain patterns the Emotiv will read it as you telling it to “move back.” This device will be revolutionary in the production of games once it can be paired with some sort of tactile feedback, because there will be nothing between you and the machine: no buttons to miss, no awkward controls, just you and the computer. Imagine games where you play a telepath, or a wizard, brought to life and more immersive than ever. Imagine fighting games where you control the character as a puppet and have to make the muscle movements to do the moves. Imagine being able to point at the screen and have your finger be the mouse. This is why I see the Emotiv as the future of game interaction, and of computer interaction, in the next twenty years. All it takes is for developers to start seeing its true power.
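That "record a thought, then recognize similar brain patterns" loop is, at its core, a pattern classifier. As a toy sketch of the idea only (this is not how the actual Emotiv SDK works; every vector, label, and function name here is invented for illustration), you could imagine a nearest-neighbor mapping from recorded feature vectors to commands:

```python
import math

def train(samples):
    """samples: list of (feature_vector, command) pairs recorded while
    the user deliberately thinks each command. Training here is just
    remembering the examples."""
    return list(samples)

def classify(model, pattern):
    """Return the command whose recorded pattern is closest to `pattern`
    (Euclidean distance over the feature vectors)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda sample: dist(sample[0], pattern))[1]

# Hypothetical three-feature "brain patterns" for three commands.
model = train([
    ([0.9, 0.1, 0.2], "move back"),
    ([0.1, 0.8, 0.3], "move forward"),
    ([0.2, 0.2, 0.9], "jump"),
])

# A new reading that resembles the "move back" training pattern.
print(classify(model, [0.85, 0.15, 0.25]))  # prints "move back"
```

Real EEG classification involves far more signal processing than this, but the basic promise is the same: once the mapping is learned, there is nothing between the thought and the command.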

Emotiv - Brain Computer Interface Technology. Web. 31 Jan. 2011. <http://www.emotiv.com/>.
Kelly, Martin C. "Historical Reflections on Victorian Data Processing." Viewpoints Oct. 2010. Print.
Mastin, Kirk. "Interface and Video Game Design: 7 Examples." Through the Looking Glass. 9 Oct. 2008. Web. 31 Jan. 2011. <http://kmastin.wordpress.com/2008/10/09/interface-and-video-game-design-7-examples/>.
Moggridge, Bill. Designing Interactions. Cambridge, MA: MIT, 2007. Print.
Wilson, Greg. "Gamasutra - Features - Off With Their HUDs!: Rethinking the Heads-Up Display in Console Game Design." Gamasutra - The Art & Business of Making Games. Gamasutra, 3 Feb. 2006. Web. 31 Jan. 2011. <http://www.gamasutra.com/view/feature/2538/off_with_their_huds_rethinking_.php?page=1>.



Friday, November 19, 2010

effort

So now that the semester's almost over and it's super crunch time, I've been thinking about the nature of effort and how it relates to today's issues, for instance copyright law, and why there is such a problem with piracy. Normally, money is supposed to represent effort in physical form. To give credit to someone who does something in the real world, you pay them with your equivalent effort. For instance, I buy a ham from a butcher and pay him with the "effort" I earned through my job as an artist. Normally this payment also covers a middleman's effort to get something to the consumer, the effort of a brand's head, and several others in between. The problem today is that digital goods don't require those middlemen. If I wanted to produce an album, I could get it straight to my audience. This is a problem because the physical object is the same product, but it's much more expensive because of the work that went into it. As for how this relates to copyright law, we recently talked about how in order to give someone your work for free, you need to be credited with creating it. The reasoning for this goes back to effort. If I want to give my product to people without them having to spend their effort on it, then they need to at least acknowledge that I put effort into creating it. Why? Why, if I were extremely altruistic as well as humble, would I absolutely have to be acknowledged for my work? It's because legally there needs to be some recognition of our work; there needs to be a record somewhere that we put in effort. But why is this necessary? Where does this benefit our economy? Where does this benefit our society? If someone wants to give something away completely, why should we stop them? Because they need to prove that they have the item in the first place. If there is no record of you having it to start with, how can you give it away?

Friday, October 29, 2010

How far is art allowed to go?

As of posting this, I just got back from the play “The Shape of Things,” which my girlfriend and I hated until we saw it performed (it's true: you can't take a play's transcript and judge it as the play). I liked the play because it brought up a lot of questions for me as an artist. In the play, the antagonist Evelyn modifies the main character through manipulation, making him a better person physically and a different person mentally, and then shatters his life by putting his transformation on display. Now, it is easy to condemn Evelyn for doing this, but I like playing devil's advocate, because it makes people think. So I will present my defense of Evelyn, not because I believe morally that she is right, but because I hope it will make you think.
Making people think is the point of art; it's why we constantly try to do the unexpected, why new mediums are adopted as art, and why what was thought-provoking years ago isn't anymore. So when does art cross the line? This is difficult, because artists are supposed to cross the line. The famous quote (whose source I forget) is that “it's not art until it pisses someone off,” and as a society we are constantly becoming more open-minded. This is a good thing, because it allows us to know more about our world and make educated moral decisions. But where is the REAL line? Can you mess up someone's life forever in the name of art? Think of all the thought that the act alone will provoke in the person and in all who view the work. Can you kill in the name of art? Can you condemn your own or someone else's “soul” by proving that their entire life is built on ever-shifting sand, and then show them that there is rock beneath? What if it makes people think? What if it causes epiphanies everywhere? It's easy to say that it's not worth it, but how many lives have we sacrificed in the name of science for the same purpose, or for politics? Can there be acceptable losses for art?
To complicate matters even further, what if these losses are only a risk? What if there's a chance someone or something will be killed? A real-life example is the piece by Guillermo Vargas, a man unpopular with PETA and other animal lovers because he starved a dog as art. But what if his purpose was to explore whether or not humans would do the right thing in the face of legal action? What if the exhibition itself was the piece? What if he hoped that someone would ignore his wishes, get up, dodge the guards, and feed the dog, even with the obvious consequences? People signed petitions (hell, I did too), but Facebook petitions obviously do nothing, when anyone at the exhibit could have actually done something. And did it not make us think? Was the poor dog's potential death (it happened, it's over, it died, so not really potential) worth it? I ask you this.

Friday, October 8, 2010

Pieces of the Puzzle

After reading works like Cult of the Amateur, The Wisdom of Crowds, and The Tipping Point, and learning about the history of media up until this point, I'm only seeing some of the puzzle pieces. I understand now that media is directed by people, not by events. The internet did not become a cultural landmark upon inception; it became a landmark after a billion people used it. The telegraph, TV, and radio were the same way. It takes the crowd to deem whether an invention is worthwhile. The car was invented long before Ford, but people don't just snap up a good idea; they wait to pick the right idea. It wasn't until we got the combination of the right fuel type (like it or not, we chose gas over electricity), the right manufacturing method, and the right price. The same goes for new media: we tried Myspace and Friendster, and people dropped them to cling to Facebook.
What I don't understand is this idea of clinging to the old when things no longer work. Cult of the Amateur had some good criticisms of Web 2.0, namely that we don't appreciate experts and that we are siphoning money out of the economy by trusting these amateurs for free. However, the author seems to want to cling to the past, and mourns that these jobs are going away. While I am incredibly sympathetic to the people who are no longer able to make money off of advertising because of Craigslist, or cannot write news stories because bloggers will do it for free, I say: innovate. People are tenacious creatures, and when the going gets tough, the tough will find a way to forge a living. Maybe it's time for TV to lose its chunk of the pie to online video. Maybe we need "web critics" who will tell us who is an expert online and who is not worth following. Maybe we need to create more value, as Rich Nadworny was so adamant about. If people are doing things for free, then we need to take it as a blessing, because we can then find other ways of creating value.
This isn't to say that I see the system as fine as it is. There is still a lot of crap online, and wading through what is valuable and what is not is hard work. And even if we were to have an online critic to do it for us, so we can see what is worth our attention, how would we pay them? The other problem is that we are making mental labor less valuable. I'm all for automation and for having computers do tasks that humans shouldn't need to perform, but news and music and culture are our THING. That's what we will always do better than computers, so why are these things being devalued? It's because these are the things people want to do. People will always want to make art and music in their free time, and people will always love spreading the news to anyone who will listen. But if people will naturally do this, then why should we move away from it? Shouldn't we be working to create a world where everyone enjoys going to work? In an ideal society, wouldn't we have everything taken care of already, with people free to create culture? And shouldn't this be getting easier, with computers able to perform more and more of the jobs that people do not want to do? Hopefully I will find these answers as I continue this semester, and hopefully I will be able to move humanity to do what it truly does best. Imagine.

Friday, September 17, 2010

jumping in head first

The first three weeks of the program have been challenging, to say the least. I quit my part-time job so I could focus, and I constantly feel the need to prove myself. There is a lot of work to be done each week, including a book, some other articles, web work, a blog that requires maintenance, and art that needs creating, but thankfully it is all interesting work. I have no doubt that this will be a trial by fire, and that I will come through it. So far we've had a great deal of theory work, with talks about how trends get started and how people work in groups, which is very important for doing any work meant to go viral. We are also learning the basics of Photoshop and Dreamweaver, two programs I'm pretty confident in. But I'm sure that I'll be learning some new software soon, along with more of the theory that I find so intriguing. So bring it on; I will definitely rise to the occasion.