Posts for conferences
DAC 2013 from the show floor, by Sebastien Mirolo on Fri, 14 Jun 2013
Last week the 50th Design Automation Conference (DAC 2013) was held in Austin, Texas and, of course, fortylines was there.
It was the first time I attended DAC. Veterans told me the show floor was a lot smaller than it used to be, maybe down to a fifth of its former size. Personally, for an industry projected to reach $6B in 2014 and dominated by three major players (Synopsys, Cadence, Mentor), the show floor looked reasonably sized.
Compared to the Game Developer Conference or meetups around the thriving San Francisco startup ecosystem, the first surprise was the rather older crowd. Young faces were singularly missing, apart from the booth babes you typically find at major conferences. It seemed odd that so few attendees looked under forty when you know DAC distributed free passes months in advance.
Second surprise: the French-speaking contingent was there in full force. Outside Montreal, I have not been to many places where you can switch freely between English and French to communicate with vendors. DAC 2013 was one of them.
A last first-time impression: for an industry that builds the tools that build future technology, the demos were very conservative; not much social platform integration (few people knew about Asana), very little cloud integration, zero talk of leveraging big data analytics, and no Kinect-based or other envelope-pushing human-computer interfaces. Hell, fortylines was the only company there that could demo its solution on an iPad!
The EDA Hunger Games
The discussion panel titled The EDA Hunger Games was lively and truly informative.
While most software giants, following Twitter's lead, have more or less agreed to treat patents as defensive deterrents, pretty much any of the big EDA and IC players can and will bankrupt your company through a lawsuit if they feel threatened. Lawsuits were discussed as a real obstacle to fostering a vibrant EDA and IC startup ecosystem.
Funding is also an issue for many EDA/IC startups. The minimum angel investment required to get off the ground is huge, in the tens of millions of dollars. Even at, say, $50M, that would not be so much of a problem for venture capital funds if the return on investment were not highly speculative. VCs are looking for home runs (10x ROI) and so far the EDA/IC industry has only delivered a handful in the last thirty years. When you compare this record to the social and e-commerce industries, which can get off the ground with minimal capital and deliver a couple of home runs per season, you understand why VCs are not flocking to bet the house on a pure EDA/IC startup.
There was debate about financing the startup ecosystem through strategic investors. Whether it is Intel, Samsung, or Qualcomm, they are all more than willing to make huge investments because they recognize innovation as a key component of their business. They are, though, public companies and as such face two major constraints. First, they cannot make an equity investment, otherwise the financial results of the startup will show up on their books. Second, they have to justify strategic investments to their shareholders. Right of first refusal on a technology sale, life-long discounts, etc. are at the center of a negotiation with a strategic partner. It seems obvious, yet many entrepreneurs make the mistake of presenting a VC-oriented slide deck to a strategic partner.
Last data point of the debate: there are about 100 super sales reps who know all the channels and accounts in the industry. That is a really small number. Should a startup then invest in its own sales force or go through one of the major sales middlemen?
On the web and in the cloud
I had a fun discussion with an engineer at SkillCAD on using the Microsoft Kinect to create wires in their visual custom IC layout tool, but for the most part I was interested in what is happening around EDA and cloud computing.
Most of the EDA vendors have made attempts to move into the cloud (see Olivier Coudert - FMCAD 2012). Talking with business developers, their IC clients are usually the ones resisting the initiative. Protection of intellectual property is the number one concern on everyone's mind. Anecdotally, companies in China are the most concerned about IP theft.

ChipPath deserves kudos for an imaginative and pragmatic use of the web. They maintain a regularly-updated database of data-sheet information for various IPs and provide a web front-end to explore and make back-of-the-envelope estimates of area, power, and performance. ChipArchitect will also generate some scaffolding code, from what I understood. This is a pragmatic approach to building an IC marketplace while alleviating most of the concerns with storing IPs in the cloud.
OneSpin Solutions is another interesting company that has taken a pragmatic approach to cloud computing. OneSpin's formal verification flow is simple to understand and gets around most of the security concerns. You compile your IC design with their client software on your own IT infrastructure, upload the compiled version to their service in the cloud, and get back results which are correlated to your design through the client software. Actually, in the early days of fortylines we considered providing cloud-based Verilog simulation in a similar fashion.
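To make the shape of that flow concrete, here is a minimal sketch in Python. Every function, artifact, and return value here is hypothetical (OneSpin's actual tools are proprietary); the point is simply that only an opaque compiled artifact, never the RTL source, leaves your infrastructure.

```python
# Sketch of a compile-locally / verify-in-the-cloud flow.
# All names are illustrative stand-ins, not any vendor's real API.

import hashlib

def compile_design(rtl_source: str) -> bytes:
    """Stand-in for the client-side compiler: produces an opaque
    artifact from which the RTL cannot be recovered."""
    return hashlib.sha256(rtl_source.encode()).digest()

def upload_and_verify(artifact: bytes) -> dict:
    """Stand-in for the cloud service: runs verification on the
    compiled artifact and returns results keyed by artifact id."""
    return {"artifact_id": artifact.hex()[:12], "status": "pass"}

def correlate(results: dict) -> str:
    """Client software maps cloud results back onto the local design."""
    if results["status"] == "pass":
        return "design ok (" + results["artifact_id"] + ")"
    return "violations found"

rtl = "module counter(input clk, output reg [7:0] q); endmodule"
report = correlate(upload_and_verify(compile_design(rtl)))
print(report)
```

The design choice worth noting is that the cloud side only ever sees the artifact; correlation back to source lines happens entirely on the client.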
On the DAC 2013 show floor, many companies have internalized their customers' fear of words like "public" and "cloud." You would thus hear "multi-site" thrown around a lot. The truth is: when you want to integrate revision control and manage large data sets across sites like IC Manage does, you need some kind of sophisticated cross-continent IT infrastructure.
With two sites connected through the global Internet, you implicitly trust many agents: the cable company, the encryption software, etc. The threat model is completely different from the air-gapped, perimeter-security model (think airports) you had with a single site. So what does multi-site vs. cloud really mean? From a security threat perspective: nothing. From a liability perspective: it boils down to the rule of law and whom you trust.
Designing ICs in the Cloud
Cloudification always comes in three steps across all industries:
- 1. Current tools are re-engineered into a cloud-based version.
- 2. Business models evolve to various Software-as-a-Service (SaaS) models.
- 3. New applications emerge around the elastic nature of cloud infrastructures.
The cloud provides leverage and thus directly benefits small and medium businesses first and foremost. SMBs are always competence-starved (we are in a knowledge economy after all) and thus are often a lot better off outsourcing IT security. By analogy, they are better off depositing their cash at the bank than keeping it under the mattress.
What an SMB needs most is customer awareness, a.k.a. marketing. IC startups are no different. Any cloud initiative must focus on building a marketplace for IC designs centered around trust management. eBay and credit card companies are great examples of trusted third parties (i.e. A and B do not trust each other but both trust C). Nothing new or special here.
Now, the advantage of a trusted IC marketplace is that you can test-drive IPs before buying them. You can build and test your whole IC without ever seeing the code of the IP you are designing into your system. That requires various design space exploration, simulation, and verification software. Fortunately, many third-tier EDA vendors are struggling against the big three and are eager to find alternative revenue channels.
Every IC/EDA small and medium business benefits from such a design-in-the-cloud marketplace. At this point, the only constraint on growing a profitable marketplace business is the health of IC/EDA SMBs. Fostering a vibrant IC/EDA startup community is a key element to that end. That's why fortylines focuses first on education and tinkerers looking for fun stuff to do using open source tools.
RapidSaas Meetup Notes, by Sebastien Mirolo on Fri, 17 May 2013
One of the most compelling arguments for building a SaaS (Software-as-a-Service) business is that you can do it on a dime and be profitable within a few months: no angel, no VC, fully bootstrapped.
A famous advocate of this model is David Heinemeier Hansson of 37signals, and one of the strongest believers in focusing on revenue very early on is none other than Hasan Mirjan of spheremail, a friend and client of ours.
That is the magic key: "Focus on revenue from day one." If you do not take external investment, you most likely have a few thousand dollars saved up. They go really fast.
From no idea to revenue
As sure as the three "P"s of Integrated Circuit design are Power, Power, Power, the three "S"s of a rapid SaaS are Sales, Sales, Sales.
The premise of Dane Maxwell's approach is that many markets are still organized around ad-hoc solutions where software could bring 10x productivity increase. So the process goes something like this:
- Pick a market
- Find the major pain for someone in that market
- Sell a SaaS-based solution
- Build it
A lot of serendipity goes into picking a market, yet some indicators help size up the opportunity. A market with a lot of hiring going on is growing (you can look at job postings). A market that advertises a lot is competitive, which also means there is extra money to spare. Finally, the more profit-oriented a market is, the easier the sale.
Once you have a market, you should identify its most painful day-to-day activities. That requires two things: writing down a thorough list of questions and talking to as many people as possible. Most would-be entrepreneurs do not pass that step. If you are thinking of starting a business:
- Write down a script to guide a phone conversation. You are looking to identify the challenges faced by your target market at this point.
- Gather a list of one hundred people in that market. With all the services available on the web today, this is relatively easy.
- Get on the phone
The numbers are pretty consistent: an introduction email to 100 people will lead to roughly 30 follow-up phone calls and about 10 leads. Cold calling is hard. You will have to go out of your comfort zone. You are thus better off knowing sooner rather than later whether you can clear that bar, before you invest much time and money in your project.
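Those quoted numbers can be turned into a tiny calculator for your own list size. The 30% and 10% rates come straight from the talk; everything else is simple arithmetic.

```python
# Outreach funnel from the talk: 100 emails -> ~30 calls -> ~10 leads,
# generalized to any list size. The default rates are the quoted ones.

def funnel(emails_sent: int,
           call_rate: float = 0.30,   # follow-up calls per email
           lead_rate: float = 0.10):  # qualified leads per email
    calls = round(emails_sent * call_rate)
    leads = round(emails_sent * lead_rate)
    return calls, leads

print(funnel(100))   # (30, 10)
print(funnel(250))   # (75, 25)
```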
Among the many advantages of talking early with people in the market:
- You get validation before investing significant resources
- You narrow down on the best buyers: their job title, day-to-day worries, emotional biases in decision making, etc.
Sales, Sales, Sales
A minute: that's about how long an average person can hold their breath. A few days: that's how long a business can survive without cash. It is a harsh reality. Sales create an inflow of fresh cash and, like inhaling, they are critical to the survival of the business.
The Ultimate Sales Machine by Chet Holmes is one of the most thorough books on the subject of sales. If you are a trained engineer turned entrepreneur like me, start from this quote from the book:
"If you truly believe that your prospect should benefit from your product or service, it's your moral obligation to help them make a decision and get on with their lives."
Then think twice about the product you are building. Start selling or go home.
Nothing captures the tension between the two teams better than the differences between the sales and engineering recruiting processes. Yet a successful career in either field requires pig-headed determination. Engineers respect an "it's not done until it's done" attitude. They also respect dedicated attention to detail. Chet Holmes, in The Ultimate Sales Machine, argues that the best performers in sales are highly prepared and determined. They devise a process, write scripts, think ahead about all objections, and prepare for them. Skills are acquired through repetition.
This is the common ground on which a business leader needs to build synergy between sales and engineering. Why? Because aligning your sales and development cycles is critical to your company's cash flow. Cash is survival. Cash is true profit.
Engineering is very much inward-looking toward solutions. In a sense, engineers are perfect buyers. Software developers are always on the lookout for new technology, new programming languages, something to give them an edge to build better products. This is the mindset needed to become a great salesperson: think through the eyes of your prospect. What would you do if you were tasked with solving their problem? Would the product you are working on help?
Benefit to the client drives sales and is the basis of value pricing. Nothing puts it better than this recent article: what a dead squirrel taught me about value pricing.
Kinect-based businesses, by Sebastien Mirolo on Wed, 1 Feb 2012
I recently attended a meetup presenting some startups and products built around the Kinect. Kinect-like devices have the potential to make accurate 3D data acquisition faster and cheaper. MatterPort shows great promise of the kind of tools available to architects in the near future. The iClone mocap plug-in has the potential to end up in every video game studio requiring motion capture (mocap).
I also like 3gearSystems as an enabler for professionals who require precise computer interaction in constrained environments, like doctors in the middle of a surgery. I had already written about the Wii and gamification as a way to teach laparoscopy procedures. Health care is a huge consumer of technologies and also a field with a disproportionate number of people who cannot write software relative to those who can. From big data to games, it is one of the growing fields ripe for technology innovators.
Startup Sales Meetup, by Sebastien Mirolo on Thu, 31 Mar 2011
Yesterday I attended the Startup Sales Circle Inaugural Meetup organized by SalesTie. It was a great venue, with a lot of interesting people to meet and a very informative panel discussion moderated by Adam Rodnitzky of ReTel Technologies. It was definitely one of the most professional meetups I have been to.
The panel consisted of Tony Mak, Sr. Associate at OATV, Michael Linton, VP Sales at ReFrameIt.com, and George Aspland, CEO at ReTel Technologies. Coming from a software background, it was really great to see them talk about focus and strategy, two words not often associated with salespeople in engineering circles.
First and critical: the sales cycle. You need to know how long it takes on average to go from a potential lead to a final check. Cash flow is and remains one of the most important aspects of running a business.
The basics: the competitive set, the problem solved, and the technological edge have to be crystal clear every day, every meeting, every conversation. Collecting data and learning about the audience is also key to building a coherent strategy. There are always diminishing returns on CPM (cost per mille, i.e. cost per thousand views), so be sure to ask what a new user is worth to you.
From a practical standpoint, startups can be very reactive and adaptive in the sales cycle, but they have to emphasize reliability at every step. It is a confidence game.
If you are about to build your brand or pitch an investor, a strategy that works well is to emphasize image and story. If you are lucky enough to have a charismatic person around, make them the face of the company, attending every sales meeting and doing every interview. When you present, always tell the story from the point of view of the customer: "Imagine you are thirty years old with an expensive car and come to our web site..."
MacWorld 2012, by Sebastien Mirolo on Fri, 28 Jan 2011
It was my first time at MacWorld, and it was a lot of fun. Between the numerous earbuds and iPhone cases, I managed to find a couple of cool products.
tappr.tv is a very cool idea and a great app for creating music visuals. Beckinfield is also a very interesting and intriguing concept, this time in the area of next-generation television shows. Out of the crowd, globaldelight.com caught my eye with their dead-simple, easy-to-use app for post-processing video clips.
I stumbled upon solar joos; for the price, it was an amazing deal to get one of their solar batteries. What stunned me even more was the live demo of square as my credit card was processed. My card was swiped through a small credit card reader plugged into an iPhone microphone jack; I signed; and later I received a receipt email with a street map of exactly where I had made the purchase. Square just looks plain amazing.
VMWorld 2010, by Sebastien Mirolo on Thu, 9 Sep 2010
VMworld took place last week at the Moscone Center in San Francisco and it was definitely very crowded. Virtualization is a key technology enabling cloud computing, the quiet IT revolution, and VMware was definitely on top of its game at VMworld 2010.
There was little about virtualization itself, a lot about clouds and even more about services.
The big deal about virtualization is that it detaches workloads from the computing infrastructure, and that in turn pushes everyone to see clouds in terms of services. It is thus no surprise that data centers are becoming highly automated, highly virtualized environments, made accessible to end-users through web portals and now cloud APIs.
A fundamental change for many businesses starting to rely on clouds is the shift from a fixed to a flexible cost structure for their IT departments, the opportunity to bill each team for its own consumption, and the rapid turnaround to scale up and (as importantly) scale down their infrastructure. In short, it is like moving from investing in a company-wide power plant to a per-department monthly bill. In accounting terms, it is a huge deal.
Another trend supported by clouds is the rise of B.Y.O.C. (Bring Your Own Computer) initiatives. Fortylines was founded on the premise that knowledge workers should use their own hardware and favorite systems, and it is good to see we are not alone. It is estimated that 72% of corporate end-points will be user-owned devices by 2014. ThinApp and ThinDirect are the kind of technologies that make it possible to safely bring company services to such devices. Coupled with a company ThinApp store tied to a web front-end and RSS feeds, they will give end-users the opportunity to stay up to date on demand.
Companies that leverage public and independent cloud providers are reaping huge cost-reduction benefits. Companies that leverage private clouds might be in better control of delivering reliable performance and bandwidth to their employees. Most businesses will thus implement a hybrid cloud strategy in practice.
Deployment from development (just enough capacity for testing) to production systems (full capacity) has always been a major headache for web-based businesses. The cloud provides a huge opportunity to avoid a harsh cut-over and instead gradually ramp-up development systems into production while tearing down previous production systems.
After virtualizing machines, there is increasing pressure to also virtualize the routers, network, etc. I must admit that I have not fully grasped the rationale for this yet. Apparently it is an optimization to avoid going through the hypervisor and the physical network, but it seems to only apply to virtual machines in a single box.
If you are new to the clouds, want to impress your friends, or just want to avoid getting a CIO's door slammed in your face in the first five minutes, there are a few lingua-franca terms to read up on.
Virtual Machine (VM) provisioning is definitely a basic. The EC2 API, vCloud API, and appCloud are among the APIs (Application Programming Interfaces) you will surely encounter. Typica and Dasein, each a cross between a framework and an API, might also pop up from time to time.
A virtualization stack is usually built around a Data Center, a Hypervisor, a Virtual Machine (see OVF), an Application Delivery layer (see VDI), a Display Protocol (see RDP), and a Client End-Point (or Access Device). SNMP and role-based access control remain classics for server management. Storage solutions (e.g. Gluster) are also a key talking point in any serious cloud conversation.
Compliance with various laws and regulations is a major part of doing business in many fields, and IT is often charged with implementing the required systems. So PCI DSS, ISO 27005, FISMA, and ENISA will most likely be thrown in at some point as well.
Aside from the advantages, there were a lot of issues and challenges presented all week long about relying on a cloud infrastructure. That means as many business opportunities for willing technology entrepreneurs, and that was really exciting.
A major hurdle to using public clouds over private clouds is trust in outside services and fear of vendor lock-in. Both are usually addressed through transparent audit processes. Logging all activities (admin changes as well!) to a centralized log server is generally a welcome first step. Laws vary between states and countries, and businesses need to protect themselves from liability. So a second necessary step is being able to track and enforce where a Virtual Machine is physically located at all times.
Entering the cloud, an IT department will suddenly struggle with license management and the proliferation of images. Neither issue existed when workloads were tied to a physical computer. The IT technician's job will have to change accordingly. A car analogy I heard, which seems perfectly appropriate, is the change from fiddling with the engine by ear in earlier years to finding the diagnostic plug and analyzing the on-screen measurements. With large clouds running thousands of virtual machines, user interface designers are also in for a ride managing complexity.
The cloud provider will need to go to great lengths to ensure Virtual Machines from one company cannot sniff the traffic of another company, since both are co-located in the same data center, sometimes running on the same processor. This is known as the multi-tenancy problem.
Another problem bound to plague clouds if not carefully handled is rogue, pirate, and otherwise unauthorized VMs consuming resources.
A last question that seems simple but still requires careful thinking: "Who is responsible for updating the OS?"
The Internet Protocol and IP addresses were meant to describe both identity and location. It is also common for local networks to be configured with default 192.168.* addresses. Ingenious engineers will be busy solving the shared-address conflicts and performance issues that arise out of those designs in virtualized environments.
At the heart of a hypervisor, virtualized I/O performance is a very difficult problem. With an increasing number of business applications being read-bound while, for example, Windows was optimized for write-through output, the headaches and fine-tuning will keep going for a long time.
Operating systems assume they own all of the physical memory. Having virtualized physical memory without the OS's consent, the hypervisor thus only allocates machine memory when necessary. There is a whole family of techniques for this, such as transparent page sharing and hypervisor swapping, but the most mind-blowing piece of cleverness I have seen is "ballooning" as a way to reclaim machine memory.
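To illustrate why ballooning is clever, here is a toy model (all class and method names are hypothetical, not VMware's actual API): the hypervisor never guesses which guest pages are cold; instead it asks a balloon driver inside the guest to pin pages, letting the guest's own memory manager decide what to give up.

```python
# Toy model of memory ballooning: instead of swapping guest pages
# behind the OS's back, the hypervisor asks a balloon driver inside
# the guest to allocate (pin) pages. The guest OS chooses what to
# evict; the hypervisor then reclaims the pinned pages' backing.

class Guest:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.balloon = set()  # page numbers pinned by the balloon driver

    def inflate_balloon(self, n):
        """Balloon driver pins n pages; the guest's own memory manager
        decides what to swap or drop to satisfy the allocation."""
        start = len(self.balloon)
        newly_pinned = list(range(start, start + n))
        self.balloon.update(newly_pinned)
        return newly_pinned

class Hypervisor:
    def __init__(self):
        self.free_machine_pages = 0

    def reclaim(self, guest, n):
        pinned = guest.inflate_balloon(n)
        # Pinned pages are guaranteed unused by the guest, so their
        # machine-memory backing can be handed to another VM.
        self.free_machine_pages += len(pinned)
        return len(pinned)

hv = Hypervisor()
g = Guest(total_pages=1024)
reclaimed = hv.reclaim(g, 128)
print(reclaimed, hv.free_machine_pages)  # 128 128
```

The design insight the toy captures: the guest OS, which knows its own working set, makes the eviction decision, while the hypervisor only accounts for the reclaimed machine pages.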
Complete virtualized desktops are wonderful in that they deliver the OS and the app in a single package, removing a lot of update and dependency headaches. On the other hand, such packages have the side effect of making native anti-virus software hang the local machine for a while when checking a downloaded >30MB VM image, and of reporting all kinds of improper behavior (of course, an OS is bundled in that VM and will most likely contain "suspicious" kernel code).
Voila, lots of exciting news from VMworld this year! Clouds are making their way into the IT infrastructure of many companies and are bound to tremendously change how businesses organize themselves and deliver value to their customers. I hope you enjoyed it.
Hot Chips 22, by Sebastien Mirolo on Thu, 26 Aug 2010
I attended the Hot Chips conference from August 23rd to August 24th on the Stanford University campus in Palo Alto. The temperature outside was definitely hot, the hottest days on record as a matter of fact. Inside, the chips were also running hot, with presentations of gargantuan power consumption numbers. If you are interested in the press announcements, you can find good coverage of the IBM z196 and the AMD Bulldozer. I will cover here the bits and pieces that were not written down in the slides and that I found personally interesting.
Trends and Numbers
Throughout the talks, the theme was System-on-Chip, with the differences lying in the programmability of the various components. The split between general-purpose cores and dedicated accelerators might just be tied to the historical market of each vendor, if you consider the crazy-looking "XML accelerator" in the IBM z196. The AES extensions of Intel's Westmere also caught my attention with regard to the recent news about Intel and McAfee.
The AMD Bulldozer introduces nested page tables for the benefit of a hypervisor, and the AMD Bobcat focuses on minimizing data movement, using queues instead of linked lists whenever possible, in order to preserve energy.
ARM introduced virtualization features in its cores and increased the memory address space to 64 bits. ARM does not provide I/O virtualization yet, but definitely something about ARM and/or the "embedded market" is changing.
nVidia presented its new GF100 architecture early Monday morning. The main focus was on tessellation and particularly the use of displacement maps for tessellation. A major achievement for nVidia engineers was breaking through the one-triangle-per-clock barrier (about 2.5 in the demo). Interestingly too, running FORTRAN code on the GPU seems an important selling point when nVidia ventures outside the 3D graphics market.
During the questions and answers, someone mentioned that they run more than 40 miles of cable in their high-performance computing center. Otherwise, I am OK with an L1 and an L2. An L3? You are pushing it. An L4? The IBM zSeries is definitely a monster.
Ideas with potential
While Intel was impressive in presenting its Tick-Tock cadence and delivering the Westmere product, it was Wei Hu and Yunji Chen of the Institute of Computing Technology, Chinese Academy of Sciences, who presented the most daring and ambitious development in processor architecture. The goal of the GS464V is nothing less than to create a high-performance, low-power XPU. I cannot recall what the X stands for, but it means nothing less than the integration of a GPU, CPU, and DSP into a single core. Three key ideas in the design of that XPU are direct links from the vector unit to memory in addition to the L1 and L2 paths, a memory controller swapped out for a reconfigurable Memory Access Coprocessor, and the fusion of computation and shuffling into one instruction. Numbers quoted during the talk were 100 frames per second for 1080p HD H.264 decode on a single 1GHz core. The GS464V seems to still be in development at this point but if, as presented, CPU and OS is number one of the sixteen major projects (i.e. 5 to 10 billion in funding) in China, there is little doubt it will see the light of day somehow.
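A quick back-of-the-envelope check puts the quoted 100 fps number in perspective: at 1GHz the decoder has a budget of only about five cycles per pixel, which suggests why a fused vector/DSP datapath, rather than a plain scalar core, would be needed.

```python
# Sanity-check the GS464V claim: 1080p H.264 decode at 100 fps
# on a single 1 GHz core. Derive the per-frame and per-pixel
# cycle budgets from the quoted numbers.

clock_hz = 1_000_000_000   # 1 GHz core, as quoted in the talk
fps = 100                  # claimed decode rate
pixels = 1920 * 1080       # one 1080p frame

cycles_per_frame = clock_hz // fps
cycles_per_pixel = cycles_per_frame / pixels

print(cycles_per_frame)            # 10000000
print(round(cycles_per_pixel, 2))  # 4.82
```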
Mindspeed made an interesting point about cell towers. Which carrier wants to drive a truck out to install brand new towers every time a new standard emerges (2G, 3G, 4G, etc.)? To be competitive, carriers focus on the cost-per-bit, and that pushes toward building cell towers out of programmable SoCs built around fine-grained VLIWs such as the Mindspeed Application Processor.
Raminda Madurawe from Tier Logic was definitely one of the most eloquent speakers as he presented the ideas underlying Tier Logic's FPGAs. For someone like me who does not know much about TFT transistors or metal layers, everything seemed to make a lot of sense. Apparently it was not enough though; too bad.
Though technically a presentation on FPGA acceleration, the talk on searching for gas and oil was for me a far better lesson in business opportunities. Until alternative energy sources can be massively used, there is increasing pressure to search for deeper and smaller pockets of gas and oil. More data is collected by sonar boats methodically sailing the ocean, and more complex computations are run in data centers inland, in the hope of finding those last exploitable fields. Some analyses run today already take a week to complete, and more refined FWT elastic computations can be two to three orders of magnitude more intensive. The magic for a company like Maxeler Technologies is that there are about three kernels (finite difference, FFT, and sparse matrices) and two measures (time and price) that people searching for oil and gas care about.
If you haven't heard about Google Goggles yet, you are missing out on some really cool developments in search technologies. The promise behind Goggles is that you take a picture of something with your cell phone and use that picture as the query. The presentation was very entertaining and the demo impressive. From a technology point of view, it was one of the only applications presented that was computation-bound, in part because of the seemingly real-time requirement for responses and the arithmetic complexity of the underlying OCR and SIFT-derived algorithms.
Sometimes it is not about technology. It was interesting to see the presentation about the integrated GPU/CPU for the new Xbox 360. Engineers converted the GPU Verilog to VHDL, ran equivalence tools, formal verification etc. The rationale? They were just more familiar with VHDL.
GDC 2010 is over, by Sebastien Mirolo on Sun, 14 Mar 2010
The last Game Developer Conference I attended was in 2000 and, at the time, it was held in San Jose. As the GDC is now conveniently located a mere twenty-minute walk from my house, I decided to spend last week walking the exhibitors' floor and listening to various interesting talks.
The exhibitors' floor
First fact: the dotcom wave has definitely caught up with the game industry. Social gaming sites and associated online payment providers were heavily represented. Second fact: video games still act as a strong magnet for a wide variety of people and businesses. There were a lot of small studios formed as LLCs, an incredible number of universities, and pretty much all geographical areas of the globe on the floor. Turnover in video games is also still going strong, and every booth was eager to pile up resumes. Third fact: even if some big names (Intel, Autodesk, etc.) had their booths, there were a lot fewer tools, rendering engines, and "whoa!" technology providers than in my memories.
The business case for serious games is very interesting. It is based on a unique proposition: the game can be funded by institutions with a specific training need rather than by traditional publishers. With a polished design and realization, the game can also double the return on investment with a release in the general marketplace.
There was broad agreement that most of the development of a web-based game, and most of the content of an online game, happens after launch. Both businesses are heavily metrics-driven.
I also picked up a business book from the very friendly and welcoming Canadian booth called "Everything I needed to know about business... I learned from a Canadian". It is a smooth and refreshing read full of common sense, a fun and entertaining must-read.
The presentation on behavior trees was an interesting tutorial on the subject. As the presentation and questions dug deeper into the details, it became obvious that behavior trees are a reactive system, useful for keeping consistency between decisions, but that they cannot be made to present "smart" agents without major workarounds. Thus, a behavior tree approach seems questionable outside specific use cases.
From the Intel-sponsored talk on GPA, it was interesting to note that Firaxis' Civilization V moved to an event-driven model to service its game engine.
My personal innovation awards
The Unconcerned is based on a very interesting concept: explaining the events unfolding in Iran, the Iranian political climate, and Iranian culture through a game where a mother and father are looking for their daughter, lost in the mob. With a challenging subject exploring individual, group, and gender relationships, the perfect game mechanics can only be discovered and refined through rapid prototyping and quick feedback loops. The Unconcerned definitely has huge potential.
Grendel Games took the idea of building a fun game that requires the same hand motions as those executed in a surgical room during laparoscopy procedures. The game itself is made of puzzles and robots living in a fish tank. Players do not realize they are being trained in laparoscopy while they are having fun. The idea and realization are just amazing.
I gleaned a few numbers worth mentioning across the different talks. Farmville had 80 million users last month. Typically 1% to 3% of visitors on a commerce website will make it all the way to a purchase. A major social game site records around 4TB of statistics per day and has around twenty people dedicated to analyze the information. There are more than 100,000 lines of dialog in "Star Wars: The old republic" and the computer generated facial animations are based on less than 100 animation building blocks to transfer facial emotions while the sentences are spoken. Most production cycles have converged towards a 3 month release cycle with 2 weeks sprint phases.
There was a lot more at GDC 2010, like the must-see keynote by Sid Meier on gameplay psychology and the wonderful talk by Richard Tsao and Jean-Francois Vallee on western development and Chinese culture. If you get a chance, check out the video recordings; it will be worth the time.