Thursday, November 17, 2011

Visualizing Software Quality - Metrics & Views

A recurring theme in presentations at QCon SF 2011 is the use of visualization systems. This morning, I attended a presentation entitled Software Quality – You Know It When You See It. It was presented by Eric Doernenburg from ThoughtWorks and will most definitely appeal to all the software QA buffs out there. This topic has a lot of content, so I will spread my notes across multiple posts. Eric first posed the following question:
"How can you see software quality or non-quality?"
Well, as it turns out, there are many ways this can be done. Simply put, the steps are fairly simple and straightforward:

  1. Collect metrics
  2. Aggregate data
  3. Render graphics

In this post series, I will cover all three of these steps by presenting examples of available tools used to visualize software quality. Before we can do that, however, we must observe the software and first collect a few metrics.

Types of Metrics
What types of metrics exist which can be used to measure software quality? We have lines of code (LoC), method length, class size, cyclomatic complexity, weighted methods per class, coupling between objects or classes, amount of duplication, check-in count, test coverage, testability and test-to-code ratio, to name only a few. Data collection can be mostly automated, and tools exist out there to do this. For example, research tools such as iPlasma (see reference paper) provide such capability for extracting a range of different software metrics. This is the first step in producing a visual representation, but before going into more detail on visualization tools, let's now take a look at some viewpoint metaphors on software quality, which present the problem of "what" should be observed.
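
Before moving on to those viewpoints, here is a minimal sketch (my own illustration, not a tool from the talk) of how the simpler metrics can be collected automatically. It uses Python's standard-library ast module to report class size and method length for a hypothetical snippet of source code; real tools such as iPlasma go much further.

import ast

# Hypothetical piece of source code we want to measure.
source = """
class OrderService:
    def create(self, order):
        validated = self.validate(order)
        return self.repository.save(validated)

    def validate(self, order):
        if not order.items:
            raise ValueError("empty order")
        return order
"""

tree = ast.parse(source)

for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
        # Class size, here simply measured as the number of methods.
        print(f"class {node.name}: {len(methods)} methods")
        for m in methods:
            # Method length in lines, one of the simplest quality metrics.
            length = m.end_lineno - m.lineno + 1
            print(f"  {m.name}: {length} lines")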

The 30000 ft Level View
At this height (or at any arbitrarily greater height) we have a view on software that is very macro. We see component diagrams, high-level architecture, compositions, deployment diagrams, etc... Not very useful from a quality perspective, because we don't have enough detail to properly measure the quality of the software being observed; a lot of abstraction layers exist between these high-level views and the actual system. Also, it is very common that what the architect dreamt up and the actual production code are two very separate beasts.

The Ground Level View
At this level, one can spot and address many quality issues at the line-of-code level. This might seem relevant at first, but this level is arguably too micro to be efficient. Visualizing quality at the line-of-code level can easily overwhelm us with too much detail.

The 10000 ft Level View
Think of this view as being at just the "right level". At 10000 feet, there is not too much detail, and the observable components have enough detail to make them prime candidates for quality visualization.

to be continued in another post...

Mike Lee on Product Engineering

This is a follow-up post on last night's keynote. If you have about 47 minutes to spare, I highly recommend you have a look at the following presentation (link) on InfoQ given by Mike Lee at the Strange Loop conference on November 10, 2011. We had the encore at QCon SF last night. Enjoy!


Things I Wish I'd Known

Rod Johnson (background info) of SpringSource discusses some things he wishes he'd known when he ventured onto the business (dark) side of things with his start-up.

Rod Johnson

Most software developers are quite good with technology; for this reason, Rod's keynote focused more on business than technology. In business, especially in start-ups, there are some intense highs (e.g. financial success, changing the world) but equally painful lows (e.g. layoffs, not taking a salary, years of obsession).

So, if you're planning to do a start-up, you need to ask yourself: where is the opportunity? Remember that technology must come first! There must be an opportunity for disruption. Where is there a market gap? Did everyone else fail to see something? Did someone else screw up?

Sometimes, the questions are complicated and the answers are simple. Try to prove yourself wrong. Be your toughest critic. Why is there a business here? Is it less painful to abandon a thought experiment than a business? There will always be a better idea.

Do NOT start writing code... it's addictive if you're any good at it.

Once you've validated your business idea, believe it! Vision must matter to you at an emotional level. Align your team on the vision. Unless someone like you cares a whole awful lot, nothing is going to get better. Just think of Steve Jobs. Are you prepared for success or failure? Success means years of obsession, which has a big impact on family, friends and hobbies. Your lifestyle can be reduced compared to a normal job. And what about failure? There is a range of outcomes: at best, improving a personal brand; at worst, significant financial impact and wasted years.

Being an entrepreneur means a lot of risk. Many successful entrepreneurs have repeated failures but have bounced back. You can and will get things wrong. Some you can get away with for awhile since you can't get everything right the first time. So long as you are ready to change, this will help you. Some things, however, you cannot get wrong such as legal aspects of a business.

A great team makes a great company; this is what investors are looking for. Building a team is about complementary skills. First, you need to understand yourself and know what things you are good at and what things you suck at. Assemble individuals who possess the required skills and agree on your vision and ambition. Remember that you cannot afford to have different goals. In essence:

"We’re all a little weird. And life is weird. And when we find someone whose weirdness is compatible with ours, we join up with them and fall into mutually satisfying weirdness—and call it love—true love." - Dr. Seuss

On investment, if you're starting a software company, you probably need more money than you think. A great investor will lift you to another level. A poor investor won't help and will exploit you. You will be married to your investors, so choose wisely. Not all investors are equal. A good trick to pick out good investors is due diligence. Find out a bit more about the person you want to do business with before signing the contract.

Finally, a few mistakes you shouldn't make:

  • Don't piss people off, it will come back to bite you
  • Don't let customers drive your product roadmap


Cloud-Powered Continuous Integration and Deployment

The speaker comes from Amazon and wants to give us advice on Continuous Integration (CI) and Continuous Delivery (CD) - in the cloud. In the end, I only participated in the first half of his presentation, which contained basic CI stuff. When it would have been interesting, the speaker started to be really Amazon EC2 centric... not much to retain from this session, except maybe for the poka-yoke technique (see below).

Presentation slides here.

At Amazon, the customer is the center of their universe
The old way was to flow from Requirements to Development and Check-in, then to Testing and QA, then to Release. Most of the learning from customers happens at the end, in the Release phase, with a little in the Requirements phase but pretty much none in the middle phases. So, to learn faster, the goal is to make the inner steps smaller and apply CI, CD and continuous optimization.

CI
  • Goal: to have a working state of the code at any point in time
  • Benefit: fix bugs earlier, when they are cheaper to fix
  • Metric: a new guy can check out and compile on his first day on the job
Poka-yoke technique
See here: A poka-yoke is any mechanism in a lean manufacturing process that helps an equipment operator avoid (yokeru) mistakes (poka). Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur.
Ex: ATMs with card swipers designed so you keep your card in hand, instead of those that "eat" your card and let you forget to take it back when you leave the ATM.

Basic lessons
  1. Keep absolutely everything in version control (scripts, etc.)
  2. Commit early, commit often
  3. Always check in to trunk, avoid branching
  4. Take responsibility for check-ins that break the build
  5. Automate the build, test, deploy process (a minimal sketch follows this list)
  6. Be prepared to stop the mainline when breakage occurs
  7. Only one way to deploy, and everybody uses the same way
  8. Be prepared to revert to previous versions
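
As a complement to lesson 5, here is a minimal sketch of a single automated build-test-deploy script: one pipeline, one way to deploy, and it stops the mainline as soon as a step breaks. The commands and the deploy target are hypothetical placeholders, not anything the speaker showed.

import subprocess
import sys

STEPS = [
    ("build", ["./gradlew", "assemble"]),    # hypothetical build command
    ("test", ["./gradlew", "test"]),         # hypothetical test command
    ("deploy", ["./deploy.sh", "staging"]),  # hypothetical deploy script
]

def run_pipeline():
    for name, command in STEPS:
        print(f"==> {name}: {' '.join(command)}")
        try:
            result = subprocess.run(command)
        except FileNotFoundError:
            print(f"Pipeline stopped: '{name}' command not found", file=sys.stderr)
            sys.exit(1)
        if result.returncode != 0:
            # Be prepared to stop the mainline when a step breaks.
            print(f"Pipeline stopped: '{name}' failed (exit code {result.returncode})",
                  file=sys.stderr)
            sys.exit(result.returncode)
    print("Pipeline succeeded: the code is in a working state.")

if __name__ == "__main__":
    run_pipeline()
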
Cloud
VCS systems in the cloud, CI servers in the cloud, distributed build in the cloud
They use Jenkins, TeamCity, CruiseControl, Bamboo

Uptime in High Volume Systems - Lessons Learned

Excellent presentation from Urban Airship (UA) - mainly, what struck me is how Agile and Lean they seem to be, in the real sense of the terms, at all levels of their company, from Engineering to Operations. The speaker had way too many slides for an hour, however... Here's the overview (more details at first, then less and less as the speaker accelerated... I will put the slides below for those interested in more details!):


Presentation slides here.

About Urban Airship

  • Hosting for mobile services
  • Unified API for services across platforms
  • Content delivery at all scale
  • SLAs for throughput, latency
  • Apple, Android, RIM

UA is a Lean Company

  • Specifically Lean startup
  • From wikipedia: Use of FOSS and employment of Agile Techniques; is a "ferocious customer-centric rapid iteration company"
  • Attention to Continuous Improvement
  • Value the elimination of waste
  • Transparent, open processes
  • Does not apply to just Engineering but also to Operations

UA by numbers

  • more than 20K developers
  • 300 million active application installs use our APIs
  • across more than 170 million unique devices
  • tens of billions of API requests per month
  • 10 million direct socket connections to our servers
  • more than 50TB worth of analytic data
  • 30 software engineers, 5 operations engineers

Obligatory Architecture slide
See slides that will be posted soon
3-tier architecture (using Apache Cassandra, PostgreSQL, Java, Python, HDFS; they are a big HBase user, like Facebook)

Architecture - General principles

  • Keep everyone moving in the same direction
  • Help discrete teams understand how they interact
  • Think in terms of small discrete services
  • Continuous capacity planning based on real data
  • Avoid local optimization decision making

Architecture - Services
Trending towards a service based architecture
Critical traits of a service

  • Minimal exposed functionality (smallest reasonable surface area to the API; operate on one type of data and do it well)
  • Simple to operate
  • Over-exposure of metrics and stats
  • Discoverable via ZooKeeper (future)
  • Zero visibility into inner workings of other services
  • No shared storage mechanisms across services (services are completely fronting their datasets) - Motivation for this: security, performance, scalability
  • Minimize shared state - use ZooKeeper if absolutely necessary
  • Consistent logging and configuration properties
  • Consistent implementation idioms
  • Consistent message passing
  • Convention for on-disk layout and structure (directory structure is standard on all their nodes)


Architecture waste reduction


  • All back-end services are in Java and Python
  • All Java services are made to use a single set of operational scripts
  • Always looking for new ways to eliminate waste
  • Architectural waste comes in many forms: lots of data storage engines (PostgreSQL, MongoDB, Cassandra, HBase), using a complex, unfamiliar queuing system, large diversity in approaches for managing services, worker processes, process management, etc.
  • Developer silos - avoid the bus factor - they always have at least 2-3 people per service (and they currently have 35-40 services)

Architecture - fault domains
Essentially, they worked hard to ensure that when two resources are completely unrelated, they sit in isolated fault domains if at all possible.
Engineering at UA

  • 46% of the time they develop new features
  • 28% spent on sustaining and internal support
  • 21% production support
  • 2% - social stuff (beer, ping-pong)

Engineering for iteration

  • Team of about 30 engineers
  • Small teams organized around functional area (DB guys, etc.)
  • Shortest iterations possible - Lean MVP concept (Minimum Viable Product)
  • No formal QA team (!) - and they seem to be happy with giving this responsibility to all their people.
  • Frequent pairing, but not mandatory - this is the choice of the developers themselves
  • They always leave code better than they found it
  • All bug fixes require a code review in a review board
  • Large new developments require a sit-down code/design review (a little more formal, but not that much)

Engineering for automation
Three levels of testing: Unit, Functional (with mocks) and Integration. Commits are done to a single main git branch.
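
To illustrate the middle level, here is a minimal sketch (my own, with hypothetical names) of a functional test in Python: the service is exercised through its public interface while its external dependency is replaced by a mock from the standard library.

import unittest
from unittest import mock


class PushService:
    """Hypothetical service that sends a notification via a delivery client."""

    def __init__(self, delivery_client):
        self.delivery_client = delivery_client

    def notify(self, device_id, message):
        if not message:
            raise ValueError("empty message")
        return self.delivery_client.send(device_id, message)


class PushServiceFunctionalTest(unittest.TestCase):
    def test_notify_delegates_to_delivery_client(self):
        # The real delivery client is mocked out, so the test stays fast.
        client = mock.Mock()
        client.send.return_value = "queued"

        service = PushService(client)
        result = service.notify("device-42", "hello")

        self.assertEqual(result, "queued")
        client.send.assert_called_once_with("device-42", "hello")


if __name__ == "__main__":
    unittest.main()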


Engineering for simplicity
They simplified metrics and stats capture so they can do it everywhere!
They capture latency for service-critical operations and external service invocations.
They capture counters for service-critical operations and service faults.
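
A minimal sketch (not UA's actual library) of what such uniform capture can look like in Python: a decorator that records a call counter, a fault counter and the latency of every service-critical operation, cheap enough to apply everywhere. The operation names are hypothetical.

import time
from collections import defaultdict
from functools import wraps

counters = defaultdict(int)
latencies = defaultdict(list)


def instrumented(operation):
    """Capture call count, fault count and latency for an operation."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            except Exception:
                counters[f"{operation}.faults"] += 1
                raise
            finally:
                counters[f"{operation}.calls"] += 1
                latencies[operation].append(time.monotonic() - start)
        return wrapper
    return decorator


@instrumented("device_lookup")
def lookup_device(device_id):
    # ... the real service-critical work would happen here ...
    return {"id": device_id}


lookup_device("device-42")
print(dict(counters), {op: len(samples) for op, samples in latencies.items()})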

Engineering for Operations

  • Tests and deployment scripts are done within the development teams (their definition of done essentially includes the deployment automation).
  • Service deployments are done via automation tools
  • Automation scripts always pull from the prod git branch after passing automated and manual tests
  • They apply the "Put the mechanics on the helicopter" principle
Engineering for Responsiveness
  • Low latency, high throughput message paths using an in-house developed RPC system based on Netty and Google protocol buffers.
  • They support sync and async clients, and journaling of messages
  • Latency-tolerant message paths using Kafka for pub-sub messaging (they generally favor the pub-sub model; a small sketch follows this list)
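
As promised, a small pub-sub sketch. UA's stack is Java-based; this only illustrates the latency-tolerant pattern using the kafka-python client against a hypothetical local broker and topic name (it assumes the kafka-python package and a running broker).

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "device-events"    # hypothetical topic
BROKER = "localhost:9092"  # hypothetical broker address

# Producer side: publish and move on; consumers catch up at their own pace.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b'{"device": "device-42", "event": "registered"}')
producer.flush()

# Consumer side: an independent service subscribes and processes events.
consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER,
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=1000)
for message in consumer:
    print(message.value)
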
Engineering for Availability
  • They use dark launches (like Facebook), which are essentially roll-outs of new functionality to a subset of customers
  • Take a new service in or out of prod with no customer impact (double writes, single reads, migration, cutover, load-balanced HTTP with blended traffic to the new and old services); a small sketch of the double-write/dark-read idea follows this list
  • Their service abstraction helps immensely
  • Requires extra discipline for co-existing versions of services
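
Here is the promised sketch (my own illustration, not UA code): writes go to both the old and the new store, reads stay on the proven path, and only a hypothetical subset of accounts exercises the new read path.

DARK_LAUNCH_ACCOUNTS = {"account-7", "account-42"}  # hypothetical subset


class DictStore:
    """Trivial in-memory stand-in for a real storage service."""

    def __init__(self):
        self.data = {}

    def write(self, account, key, value):
        self.data[(account, key)] = value

    def read(self, account, key):
        return self.data.get((account, key))


class BlendedStore:
    """Double writes, single reads, with dark-launched reads for a subset."""

    def __init__(self, old_store, new_store):
        self.old_store = old_store
        self.new_store = new_store

    def write(self, account, key, value):
        # Double writes: old and new systems stay in sync during the migration.
        self.old_store.write(account, key, value)
        self.new_store.write(account, key, value)

    def read(self, account, key):
        # Reads stay on the proven path, except for dark-launched accounts.
        store = self.new_store if account in DARK_LAUNCH_ACCOUNTS else self.old_store
        return store.read(account, key)


store = BlendedStore(DictStore(), DictStore())
store.write("account-7", "tags", ["sports"])
store.write("account-1", "tags", ["news"])
print(store.read("account-7", "tags"))  # served by the new store
print(store.read("account-1", "tags"))  # served by the old store
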
Engineering for Continuous Improvement
They use the 5 whys approach
Operations at UA
A team of 5 operations engineers handling more than 100 servers (Mostly bare metal, using EC2 for surge capacity)

Operations for Transparency
They measure absolutely everything, and monitor only the important things

See the slides here: TBD

Retrospectives, Who Needs Them, Anyway?

Book references:
Essentially, a retrospective is a chance to reflect and learn (historically called a postmortem, done to learn from failures)

Norman Kerth's (author of Project Retrospectives: A Handbook for Team Reviews) Prime Directive
"regardless of what we discover, we must understand and truly believe that everyone did their best job he or she could, given what was known at the time, his/her skills and abilities, the resources available, and the situation at hand."

What happens during retros - the 5 steps
  1. Set the stage - getting ready (ex: create safety, "I'm too busy")
  2. Gather data - the past (ex: artifacts contest, timeline)
  3. Generate insights - now (ex: fishbone, 5 whys, patterns and shifts)
  4. Decide what to do - the future (ex: making the magic happen, SMART goals)
  5. Closing the retro - retro summary (ex: what helped, what hindered, delta)
Stories from the trenches
  • We should never impose retros on people not wanting retros... it never works!
  • There is a difference between retros and discussions at the cafeteria: for retros, you have (or need!) shared commitment to find solutions.
  • Serves to celebrate good things as well.
  • Serves as points to evaluate where we are going
Esther Derby's (Agile Retrospectives author) findings
Retros can be:
  • boring
  • painful (ex: finger-pointing) to the point people want to stop them
Those two issues come from a lack of:
  • focus
  • participation (thus the need to set the stage properly)
  • genuine insight (you can't only do start/stop/cont because they only show symptoms... what is the root cause behind the symptoms)
  • buy-in (takes a genuine facilitator - never the team leader - has to be neutral to the team, ideally from another team)
  • follow-through (follow-up is key, otherwise this is a waste of time)
More stories
  • 30 minutes is the shortest retro possible (for a team that knows exactly what to do) - typically, a minimum of an hour.
  • Finger pointing is a retrospective killer... a key to getting out of this pattern is to change the subject
  • Ground rules are really useful to put constraints on how retros are conducted (the "no naming, no blaming" rule for example - interpersonal issues shall not be addressed in retros but by management in parallel)
No naming, no blaming!
People, like kids, respond much better to positive comments than to negative ones. Negative comments lead to bad teamwork and eventually bad results! See the Prime Directive again and how it actually helps with this essential ground rule.

Identify changes
It is not enough to reflect; you need to take action to change things. Use SMART goals and name someone responsible for following up on each goal:
Specific
Measurable
Attainable
Relevant
Timely

You start the next retro by asking how you are doing with regard to the previously set SMART goals.

Don't "sell" retros, instead
sell a way of learning how to:
  • avoid repeated mistakes
  • identify and share success
Take away:
If you leave out a step in a retrospective, problems will emerge that you do not fully understand and cannot find solutions to.

And It All Went Horribly Wrong: Debugging Production Systems

Intro: Maurice Wilkes quote:
"As soon as we started programming, we found out to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs."

Presentation slides here.

Debugging through the ages :
Production systems are more complicated (abstraction, componentization, etc.) and less debuggable! When something goes wrong, it is all opaque and more and more difficult to fix...

How have we made it this far?
  • we architect ourselves to survive component failure
  • we forced ourselves into stateless tiers
  • where there is state, we considered semantics (ACID, BASE) to increase availability
  • redundant systems
  • clouds (especially unreliable ones, like Amazon) have extended the architectural imperative to surviving data-center failure
Do we still need to care about failure?

Single component failures still have significant costs (both economic and run-time)... but most dangerously, a single component failure puts the global system in a more vulnerable mode where further failures are more likely to happen... This is a cascading failure - and this is what induces failures in mature and reliable systems.

Cascaded failure example 1:

An example of a bridge that collapsed in Tampa Bay because a boat with full ballasts (which was not supposed to happen) hit the bridge (which was not supposed to happen either), a bridge that had been built by a crooked contractor playing with the sand/cement ratio in the concrete to save money (which was not supposed to happen either - after all, this is Florida, not Quebec!). In the end, it took all those "unlikely" events occurring together to cause the bridge to fall.

Wait, it gets worse
  • this assumes that the failure is fail-stop
  • if the failure is transient, a single component failure alone can induce system failure
  • monitoring attempts to get at this by establishing liveness criteria for the system - and allowing the operator to turn a transient failure into a fatal one...
  • ... but if monitoring becomes too sophisticated or invasive, it risks becoming so complicated as to compound failure.
Cascaded failure example 2:
An image of the 737 rudder PCU schematic (details here) - another example of a cascaded failure, one that led to B737 landing issues.

Debugging in the modern era
- Failure - even of a single component - erodes overall system reliability
- When a single component fails, we need to understand why and fix it

Debugging fatal component failure
  • when a component fails fatally, its state is static and invalid
  • by saving that state - which lives in DRAM - to stable storage, the component can be debugged postmortem
  • one starts with the invalid state and proceeds backward to find the transition from a valid state to an invalid one
  • this technique is old: core dumps (a small sketch of enabling this follows the list)
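
Here is the promised sketch (my own, Unix-only) of arranging for that state to be available: raise the core-dump size limit so the OS writes a core file on fatal failure, and have Python dump tracebacks on fatal signals.

import faulthandler
import resource  # Unix-only standard-library module

# Raise the soft core-dump limit to the hard limit so a core file is written.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# Dump Python tracebacks on fatal signals (SIGSEGV, SIGFPE, SIGABRT, ...).
faulthandler.enable()

# ... the component's normal work runs here; on a fatal failure the saved
# state can be debugged later, in parallel with the production system.
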
Postmortem advantages
  • no run-time system overhead
  • debugging can occur anytime, in parallel with the production system
  • tooling can be very rich, since the overhead imposed on a run-time system is not a concern
Cascaded failure example 3:
Flight Data Recorder from Air France crash (747) found 1.5 years after the crash
This recovery definitely permitted postmortem analysis

Postmortem challenges
  • need the mechanism for saving state on failure (ex: core dumps)
  • must record sufficient state (program text + program data)
  • need sufficient state present in DRAM to allow for debugging
  • must manage state such that storage is not overrun by a repeatedly pathological system
Conclusion:
these challenges are real but surmountable - as in some open source systems presented below (MDB, node.js, DTrace, etc.)

Postmortem debugging: MDB
  • a debugger in the illumos OS (a Solaris derivative)
  • extensible with custom debugger modules
  • well advanced for native code, but much less so for dynamic environments such as Java, Python, Ruby, JS, Erlang...
  • if components going into infrastructure are developed in those languages, it is critical that they support postmortem debugging.
Postmortem debugging: node.js
  • not really interesting unless you do JavaScript...
  • debugging a dynamic environment requires a high degree of VM specificity in the debugger...
  • see all details on dtrace.org/.../nodejs-v8-postmortem-debugging
Debugging transient component failure
  • a fatal failure, despite its violence, can be root-caused from a single occurrence
  • a non-fatal (transient) failure is more difficult to compensate for and to debug

Reliability Engineering Matters, Except When It Doesn't

A presentation on Reliability Engineering (RE) and how it can be used for software systems. Quite a confusing presentation doing a full circle from
  1. RE is good for many domains including software to
  2. RE is very complex for software systems to
  3. But it is still a tool that you can use, not just an absolute one...
Presentation content:
This book written by the presenter was introduced by the host as an excellent book on the subject: "Release It!" by Michael T. Nygard

Presentation slides here.

Context and example
Reliability Engineering (RE) involves a lot of math, so the speaker tried to present concrete examples. What if the hotel bar's lighting rack fell on happy drinkers (ex: Pascal)? We could analyze the supporting chain and everything else using statics and mechanics principles. This is fine, but it abstracts away other possibilities affecting reliability, such as earthquakes, the beam wearing out, drunk people occasionally hanging from it, etc. Or we can analyze it from another perspective: the rack held properly yesterday, today is a lot like yesterday, so the rack will be OK again today... but the point is that it will eventually break for sure, and that is what RE is about.

RE Maths
The presenter went through many mathematical models with hazard equations, fault density, etc. Those come from the many disciplines that software RE draws inspiration from, in which the probability of failure increases with time; but software does not really wear out... so we cannot simply apply those models blindly.

By taking a single-server example and then a multiple-server example, he presented Reliability Graphs (http://en.wikipedia.org/wiki/Reliability_block_diagram), where there is a start node, an end node, and everything in between forms the successful reliability paths. A single path is subject to global system failure if any subsystem along it fails.
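
To make the graph idea concrete, here is a minimal sketch (my own, with made-up figures) of reliability block diagram arithmetic: components in series must all survive, while redundant (parallel) components only fail together, under the usual independence assumption discussed next.

from math import prod

def series(*reliabilities):
    """All blocks on the path must work: R = R1 * R2 * ... * Rn."""
    return prod(reliabilities)

def parallel(*reliabilities):
    """Redundant blocks: the group fails only if every block fails."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical single path: load balancer -> app server -> database.
single_path = series(0.999, 0.99, 0.995)

# Same path, but with two (assumed independent!) app servers in parallel.
redundant_path = series(0.999, parallel(0.99, 0.99), 0.995)

print(f"single app server:     {single_path:.4f}")
print(f"redundant app servers: {redundant_path:.4f}")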

3 Types of failures
  1. Independent failure: Failure of one unit does not make another unit more likely to fail (excellent! but is that true for software systems? not likely)
  2. Correlated failure: Failure of one unit makes another unit more likely to fail.
  3. Common mode failure: Something else, external to the measured system, makes 2 redundant units likely to fail (ex: redundant LEDs in an Apollo capsule that were both subject to overheating)
Important note for RE when applied to software systems:
Lots of software reliability analyses make the error of assuming perfect independence between duplicated resources. The last two types of failure are way more common.

Blabla you should probably skip unless you know Andrey Markov (the mathematician)...
The speaker then went through more real-life examples, mainly to lead us to the various pitfalls of formal analysis with regards to software reliability. He showed that because of load balancing algorithms, for example, redundant systems were not independent, and that a failure on one certainly increases the probability of failure on the other ones (which receive the load of the failed one in this example). Also, if you run a system with 9 servers, it is certainly because you need most of them to be alive for the system to work (otherwise, you overbuilt your system...); he introduced the concept of the minimum number of systems required to be alive out of the total number of systems deployed. Interestingly, one of the pitfalls of increasing reliability seems to be that failover mechanisms tend to bring their own failure paths with them...

In the end, when he used quite a small system as an example (maybe 5 machines) and put the "invisible" systems from the logical system block diagram (ex: switches, routers, hard drives, etc.) into the probability-of-failure equation, the result was no less than spectacular (I can't write this equation down!). But it certainly proved again that true independence most often does not exist in software systems. Also, unlike in the famous construction analogies, a failed software system can come back up. To handle that aspect, he introduced the work of the mathematician Andrey Markov and his Markov model (essentially state diagrams with probabilities of changing state on each arc). Markov models become awfully complex even for the simplest systems...
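
For a flavour of what the very simplest Markov availability model looks like, here is a sketch (my own illustration, made-up rates): one component that is either up or down, failing at rate lam and being repaired at rate mu; its steady-state availability is mu / (lam + mu), which the tiny simulation confirms numerically.

import random

lam = 1 / 1000.0  # hypothetical failure rate: one failure per 1000 hours
mu = 1 / 4.0      # hypothetical repair rate: four hours mean time to repair

analytic = mu / (lam + mu)

# Tiny simulation alternating exponentially distributed up and down periods.
random.seed(0)
up_time = down_time = 0.0
for _ in range(10_000):
    up_time += random.expovariate(lam)   # time until the next failure
    down_time += random.expovariate(mu)  # time until the repair completes

simulated = up_time / (up_time + down_time)
print(f"analytic availability:  {analytic:.5f}")
print(f"simulated availability: {simulated:.5f}")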

Limitations of RE with regards to software systems:
  1. Intractable math (only a few systems have closed-form distributions like the Gaussian - more often they have exponential (ex: a single server), log-normal (ex: multiplicative failures) or Weibull (ex: hardware) distributions). There are also "repair" distributions that could be modeled with a Poisson distribution. But in the end, which distribution do we apply to software systems? None: software fails based on load, not on time (as opposed to typical engineering disciplines). A small sketch of these failure-time distributions follows this list.
  2. Curse of dimensionality (you should have seen the Markov model for the simple system)
  3. Mean Time Between Failures (MTBF) is bullshit! Note: see Google's analysis of hard-drive failures from a few years back. It would be good to get this data for other devices (servers)
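
The promised sketch of those failure-time distributions (my own illustration, parameters made up): the point is only that classic RE models failures driven by time, which fits wearing hardware far better than load-driven software failures.

import random

random.seed(0)
samples = 100_000

# Exponential: constant hazard rate, e.g. a single server (mean 1000 h).
exponential = [random.expovariate(1 / 1000.0) for _ in range(samples)]

# Weibull with shape > 1: wear-out behaviour, e.g. ageing hardware.
weibull = [random.weibullvariate(1000.0, 1.5) for _ in range(samples)]

# Log-normal: multiplicative effects compounding into failure.
lognormal = [random.lognormvariate(6.5, 0.5) for _ in range(samples)]

for name, data in [("exponential", exponential),
                   ("weibull", weibull),
                   ("log-normal", lognormal)]:
    print(f"{name:12s} mean time to failure ~ {sum(data) / samples:8.1f} h")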

other big killers:
  1. human error (50-65% of all outages!)
  2. interiority
  3. distributed failure modes
  4. Lack of independence between nodes and layers
So in the end, should we abandon Reliability Engineering for software systems?
NO! Even if RE cannot tell you when your system is OK, it can tell you when it is not. Modeling reduces uncertainty. Use RE like any other model and apply it where your system is at risk.

Reference back to the ambivalent presentation title "Reliability Engineering Matters ... Except When It Doesn't".

Thursday's Plan

Here is today's plan following the keynote. Don't hesitate to give your feedback or start a discussion in the comments.

Myself:
  • Software Quality - You Know it When You See It - link
  • Lean Startup: Why It Rocks Far More Than Agile Development - link
  • Experiences with Architecture Governance of Large Industrial Software - link
  • The DevOps Approach to Performance - link
  • Dealing With Performance Challenges: Optimized Serialization Techniques - link

Steve 2.0:
  • Reliability Engineering Matters, Except When It Doesn't - link
  • And It All Went Horribly Wrong: Debugging Production Systems - link
  • Retrospectives, Who Needs Them, Anyway? - link
  • Uptime in High Volume Systems - Lessons Learned - link
  • Cloud-Powered Continuous Integration and Deployment - link

Impressions of the conference so far...

Ok, day 1 of the conference is done. Now, what can be said about the experience up until now? Well, for one, I think QCon SF is awesome. The organization of such an event can be difficult as there are many people, many speakers, lots of coordination and a bunch of areas where slip-ups can occur, but Trifork and InfoQ have got everything under control. The venue layout minimizes walking distance between presentations and the hotel staff is very efficient and courteous, unlike the cable car operators. So, organization-wise, I have no complaints; we are in very good hands.

On the (slightly) down side, as with any such event, no matter how well you pre-plan which topics you will cover, you will eventually end up in a presentation that just sucks. Well… maybe the word is a little strong, but let's instead say that some of the presentation content was not in line with the expected return value inferred from the provided abstracts :) This is when having a pre-determined backup presentation plan becomes very handy.

On the brighter side, the organizers have conjured up a very innovative way to pick up feedback after each presentation. They post people at each exit holding one or two iPods with a simple and very intuitive user interface to provide feedback. The screen is divided into three zones with a smiley :( , :| or :) associated to red, yellow and green; I'll let you guess which is which. Simply brilliant! Like it? Just touch the happy green smiley. Hate it? Touch the red frowny one.

Presentation Rating System

Now, what are two things I take away from this first day at the QCon SF conference?

  1. To my surprise, I think I have created a blogging monster. Let's call him "Steve 2.0". Coming here, we had a choice of documentation mechanisms to bring back information to the mothership: produce a standard (i.e. boring, static, non-interactive) conference report or use a blogging platform that would allow us to take notes and publish content as the day went on. Option B was taken. I'm only hoping that "Steve 2.0" is not a transient state and that he will transpose his newly learned skills to another platform (you know which one…) after all this is over.
  2. Mike Lee is an awesome presenter. Going through a somewhat tiresome day of active listening, idea exchanging and intensive note taking was well worth the trouble for this guy's keynote. If you do not know who Mike Lee is, here is a little background information. His keynote on Product Engineering was done wearing a mariachi suit, which fitted this guy's personality and on-stage presence very well. I wish I could show you the video of it but, as a consolation prize [Update: Here's a video on InfoQ of the same presentation given at the Strange Loop conference], I will link you to his presentation slides.

Mike Lee on Product Engineering

So, one down and two more to go. We will be posting the expected attendance schedule for day 2 very soon, so keep reading and don't forget to comment on our posts. We'll definitely take the time to answer any questions you may have.