
## Requirements behind the American Electoral System

As many of you are no doubt aware, the United States recently went through its presidential election.  I won’t pretend that any statistical analysis that I can do could come close to equaling the excellent work done by Nate Silver at his blog, Five Thirty Eight, but I can maybe add a little value by reverse-engineering the requirements behind our electoral system.  In my opinion, it makes a good case study, since a lot of the rationale and explicit requirements behind the system are in our country’s founding documents.

By design, the president and vice president are elected by the Electoral College, not by direct election.  Congress, on the other hand, is directly elected by its constituents.  Currently, electors are appointed by each state (and are not members of the Senate or the House, confusingly), according to that state’s laws.  Electors are the folks who actually elect the president, based on the popular vote within their state.  On a state-by-state basis, electors’ votes are ‘winner-take-all’, where the candidate with the most popular votes typically wins all of the electors’ votes.

I say ‘typically’ because electors are not required by the Constitution (and, in many states, not by law) to vote for the candidate who wins the popular vote.  Since this seems a little counter-intuitive, we should explore some of the reasoning.  Before we do that, we should get into something of a Federalist mindset.  The Federalists had a strong influence on the Constitution, and basically advocated for a distributed government – they wanted a United STATES of America, acting as a federation of independent states.  So, states’ rights were very important to the Federalists.

By indirectly electing the chief executive of the country, less populous states are intended to be more fairly represented.  A likely primary requirement is that every state, regardless of population, has a meaningful say in who is elected President (and Vice President).  The legislative branch of government (Congress) is elected directly, and that’s also likely a primary requirement – that the population is proportionally represented in the legislature.  At least, this is the requirement behind the House of Representatives.  The Senate has two elected members per state, again to prevent a more populous state from dominating the Senate.

Interestingly, not all states require electors to vote on a winner-take-all basis (Maine and Nebraska are the exceptions to this), so the Federalist influence is still strong.  The winner-take-all scheme, however, is most likely in place to represent the popular vote.  This puts the onus back on common citizens to vote, and to campaign for their preferred candidate.  It’s about giving the populace a voice in the election.  And, in a democracy, direct or otherwise, what is a more important requirement?
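The winner-take-all mechanism described above is simple enough to sketch in code. This is a toy illustration only – the state names, vote counts, and elector counts below are invented, and real allocation includes the Maine/Nebraska district-level exceptions this sketch ignores.

```python
# Toy sketch of winner-take-all elector allocation. All names and numbers
# are made up for illustration; Maine- and Nebraska-style district
# allocation is deliberately ignored.

def allocate_electors(popular_votes, electors):
    """Give all of a state's electors to its popular-vote winner.

    popular_votes: {state: {candidate: vote_count}}
    electors: {state: elector_count}
    Returns {candidate: total_electors_won}.
    """
    totals = {}
    for state, votes in popular_votes.items():
        winner = max(votes, key=votes.get)  # candidate with most votes
        totals[winner] = totals.get(winner, 0) + electors[state]
    return totals

votes = {
    "Smallstate": {"A": 40_000, "B": 60_000},
    "Bigstate": {"A": 5_100_000, "B": 4_900_000},
}
electors = {"Smallstate": 3, "Bigstate": 55}
print(allocate_electors(votes, electors))  # {'B': 3, 'A': 55}
```

Note how candidate A wins the big state narrowly and takes all 55 of its electors, while losing the small state entirely – the all-or-nothing behavior that makes individual turnout matter.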

## The Refrigerator Problem

After a trip to the grocery store recently, I had to clean out my fridge to make room for some of my new purchases.  I found stuff that had been sitting in there for weeks!  It got me to thinking a little bit about how to decide what to keep and what to throw away.  For instance, the carton of milk that was 3 days away from its expiration date (expiry outside the US) – should I have kept it, or thrown it away?  What about the steamed broccoli that was pushed towards the back, and is now practically frozen?

One parallel problem is a well-known optimization research problem, called the Knapsack Problem.  It was originally framed in the context of a camping trip.  Basically, the problem asks how you decide what to pack in your knapsack in order to maximize the value of what you carry, without exceeding the weight you can manage (while still ensuring you’re carrying everything you need).  The refrigerator problem, as I see it, looks to minimize the amount of space taken in your fridge, while keeping the things that are most utilized.  I’m putting some math in this post, for a change, but let’s talk a bit about what the math means first…

I’m using the square footage of each food item as a measure of the space taken in the fridge.  The symbol I’ll use for square footage is $A_{item}$, or the area of each item in the fridge.  I’m assuming that food items fit into the height of each shelf.

Now, what I mean by ‘most utilized’ is the frequency that I’m accessing that particular item.  Some things (like milk, cheese, and veggies) are utilized daily – I try to cook as ovo-lacto-vegetarian as I can.  The frequency of use, in $\frac{1}{days}$, I’ll note by using the symbol $\nu_{item}$.

The $item$ subscript refers to a particular item, such as milk, eggs, cheese, broccoli, or apples.  In terms of optimization, what we’d like to do is maximize total utility while minimizing the area used.  One way to approach this metric is to divide the utility by the used area – basically, utility per square foot (or square meter, if you don’t live in the U.S.).  Then, you can try to achieve a new goal of maximizing the utility per unit area.  This effectively reduces the number of goals that you have from two to one, making that one goal a little easier to accomplish.
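One simple way to act on the utility-per-area metric is a greedy pass: rank items by $\nu_{item}/A_{item}$ and keep them until shelf space runs out. This is only a sketch – the item names, areas, and frequencies are invented, and a greedy ranking is an approximation, not the exact 0-1 knapsack optimum.

```python
# Toy sketch of the fridge problem: keep items greedily by utility density
# (uses per day divided by shelf area) until the shelf area runs out.
# Item names, areas (sq ft), and frequencies (1/days) are invented.
# Greedy ranking approximates, but does not always match, the exact
# 0-1 knapsack optimum.

def stock_fridge(items, capacity):
    """items: list of (name, area, uses_per_day); capacity: total shelf area.

    Returns the names of items kept, ranked by utility density.
    """
    ranked = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    kept, used = [], 0.0
    for name, area, freq in ranked:
        if used + area <= capacity:   # item fits in remaining space
            kept.append(name)
            used += area
    return kept

pantry = [
    ("milk", 0.5, 1.0),       # small footprint, used daily
    ("leftovers", 1.5, 0.1),  # big footprint, rarely touched
    ("cheese", 0.3, 0.5),
]
print(stock_fridge(pantry, capacity=1.0))  # ['milk', 'cheese']
```

The week-old leftovers lose out, which matches the intuition: low frequency of use and a large footprint give a poor utility density.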

So, you may ask, why does an engineer care about your refrigerator?  It’s an analogy to designing something.  A new product will have a certain set of features that it should fulfill (remember the discussion on requirements).  Trade-offs have to be made between which requirements will help your design, and which ones are absolutely necessary.  When looked at as a model, designing a system looks a lot like stocking a fridge!  It’s interesting for me to think about feature sets as a giant knapsack or refrigerator – what are other good analogies out there?

I’ll hide some math behind a jump or break, since not everyone’s interested in the more technical aspects of this.

## Training on Trains

I’m sitting and writing this on my tablet device, on a Caltrain train. Of course, a geek like me thinks this is pretty cool. Part of the reason I’m excited about this is because I biked to the train station. Part of my excitement is WordPress’ plugin for mobile devices that allows me to update my blog anywhere.

Enough rambling about how cool trains and tablet devices are, though. I want to explore a little bit about incentives. In other words, why don’t more people take the train or public transportation, rather than drive to work alone? This isn’t to chide or scold; I take the train to work maybe once a month, so I’m just as guilty of ‘one person-one car’ congestion. Nobody likes traffic, and you can focus on other things (such as reading or blogging) while taking public transportation. So, why keep driving?

There are a lot of incentives to driving, basically: flexibility (you choose when you get in and when you leave), not having to deal with a crowded bus or train, shorter commute time, and visible cost. This combination is usually what keeps me in my car. I sometimes like to go swimming at lunch time for exercise. I don’t get to do that when taking the train. Also, it takes me about an hour to get to work when driving. The train adds another 20 minutes on, and there are only certain, specific trains that go where I want (unless I want to transfer to BART).

On the other hand, taking the train means I reduce my daily carbon emissions, and saves me the aggravation of driving in pointless Bay Area traffic. Plus, I can get a little work done while I’m on the train. And my daily exercise is done, so I can relax a little more at lunchtime.

Taking the train, for me and for a lot of folks in the United States who don’t live near their offices, represents something of a paradigm shift.  I’m keenly interested in these shifts from an engineering standpoint, since any breakthrough innovation can represent a paradigm shift.  And, like taking public transportation for most Americans, there is a barrier to adoption of new paradigms.  These new paradigms need, then, to be incentivized.  In short, you need to show your adopters the value of the new paradigm over the old.

I’ll get more into this in a future post; for now, my stop is coming up.  I’d like to ask you, gentle reader, for more examples of paradigm shifts needing new incentives for adoption.

## Baby Talk

I’m a parent.  That’s part of the reason it takes me so long to update this blog.  I’m not complaining or trying to make excuses, but it is a bit of an excuse.  Anyway, I’m bringing the parenting thing up for a reason.  We’re about to switch our kid from a rear-facing car seat to a front-facing car seat.  This got me to thinking about car seat designs specifically, and baby product design more broadly.

And, of course, I wanted to lead in a bit to a discussion on DfX, or “Design for X”.  In this post, I’ll focus on a couple of the DfX aspects (or “X”s) that are more necessary in baby products.  DfX is a passion of mine, though, so this won’t be the last post on this topic.

DfX focuses on the nonfunctional design of a product, often called its ‘-ilities’.  This is shorthand for things like reliability, manufacturability, affordability, serviceability, and usability.  For baby equipment, nobody wants to be a manufacturer that makes low-quality equipment.  Getting back to our previous discussion about value, this means determining what exactly high-quality equipment is.  For most child equipment, a manufacturer gets the most value through durability (similar to reliability) – customers want equipment like strollers, car seats, and bottles that last, even after repeated uses.

There are a few rules of thumb that are important to consider when reliability adds value to the system.  For instance, more moving parts to a system means that there are more opportunities for failure; even better, fewer moving parts means fewer things that light up and make noise.  The combination of a more reliable product and a less noisy one should make most parents happy.
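The ‘fewer moving parts’ rule of thumb has a simple quantitative basis: if a product only works when every one of its parts works (a series system), overall reliability is the product of the individual part reliabilities. The 99% per-part figure below is an arbitrary assumption, chosen just to show how quickly reliability erodes as part count grows.

```python
# Back-of-the-envelope illustration of the 'fewer moving parts' rule:
# a series system survives only if every part survives, so its
# reliability is the product of the part reliabilities.
# The 0.99 per-part figure is an arbitrary assumption.

def series_reliability(part_reliabilities):
    """Probability a series system survives: product of part values."""
    result = 1.0
    for r in part_reliabilities:
        result *= r
    return result

# A stroller with 10 parts vs. 30 parts, each 99% reliable over its life:
print(round(series_reliability([0.99] * 10), 3))  # 0.904
print(round(series_reliability([0.99] * 30), 3))  # 0.74
```

Tripling the part count drops the hypothetical stroller from roughly a 90% chance of surviving its design life to about 74% – which is why paring down moving parts adds so much value in durability-driven products.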

A good design for reliability program also means that the designer should consider the use environment of the particular system.  For example, the designer of a baby bottle should consider the likely number of times that the bottle will be washed, with what detergents, and at what water temperature.  The designer could also consider how the bottle gets warmed up, and the temperature that the bottle gets warmed to.  In this case, a designer could consider a baby bottle to fail if the material falls apart, if the bottle starts to leak, or if the bottle starts leaching plasticizers into a contained fluid (like milk).  Similarly, a stroller can fail catastrophically (say, if a wheel falls off) because it’s rolling over bad pavement, or because repeated use and ‘wear and tear’ are leading to more friction in the wheels.

Part of a design for reliability program ties back into use cases; if a product is used outside of its intended use cases, its reliability may be severely affected.  The designer should still take steps to ensure system reliability in these kinds of ‘off-label’ uses, but some responsibility and warning should get transferred to the user.  After all, if you install the car seat slightly incorrectly, you still want it mostly functional.

## Public Transit and Clipper

One nifty thing we have here in the Bay Area is the Clipper system.  This is basically a rechargeable card, with an RFID (Radio Frequency ID) chip on it that allows the carrier to pay fares on multiple public transit systems, without having to get a physical ticket, or carry cash.  You basically wave a card at a reader, and that reader “knows” how much to deduct from your account, based on the transit agency and distance traveled (in some cases).

It’s an interesting example of interface definition and system integration (in case you were wondering why I brought up public transit in an engineering blog).  Each fare counter at each public transit agency has to ‘talk’ with Clipper’s database, to ensure that there is enough money in the user’s account to cover their fare.  Within each transit agency, each specific fare counter has to ‘talk’ to the agency’s central database to ensure that the card can be re-used without charge in the case of a transfer.  Alternatively, the fare counter has to mark the card somehow as being valid within the transfer window.
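To make the interface idea concrete, here is a hypothetical sketch of a fare-counter-to-database interaction. The class and method names are my own invention, not Clipper’s actual API, and the 90-minute same-agency transfer window is an arbitrary assumption standing in for whatever each agency’s real transfer policy is.

```python
# Hypothetical sketch of a fare-counter/central-database interface.
# Names are invented, not Clipper's real API; the 90-minute same-agency
# transfer window is an arbitrary assumed policy.

class FareDatabase:
    TRANSFER_WINDOW = 90  # minutes; assumed policy, for illustration

    def __init__(self):
        self.balances = {}  # card_id -> balance in cents
        self.last_tag = {}  # card_id -> (agency, minutes since midnight)

    def tag(self, card_id, agency, fare, now):
        """Deduct a fare unless the card is inside a free transfer window."""
        prev = self.last_tag.get(card_id)
        if prev and prev[0] == agency and now - prev[1] <= self.TRANSFER_WINDOW:
            return "transfer"  # re-use without charge inside the window
        if self.balances.get(card_id, 0) < fare:
            return "declined"  # not enough money to cover the fare
        self.balances[card_id] -= fare
        self.last_tag[card_id] = (agency, now)
        return "charged"

db = FareDatabase()
db.balances["card1"] = 500                    # $5.00 loaded
print(db.tag("card1", "Muni", 225, now=480))  # 'charged'
print(db.tag("card1", "Muni", 225, now=540))  # 'transfer'
print(db.tag("card1", "Muni", 225, now=700))  # 'charged'
print(db.balances["card1"])                   # 50
```

Even this toy version shows why the interface definitions matter: the fare counter, the agency, and the central balance store all have to agree on what a ‘tag’ event means and who decides whether it is a charge, a transfer, or a decline.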

Even specific agencies could ‘talk’ to one another, if they’ve got some kind of special promotion or discount (Muni riders could get special discounts on BART during some event days, for example).  It’s all defined by the interfaces.   Clipper’s business model (the system is operated by a for-profit contractor) seems to be centered around selling a set of interfaces, providing service for those interfaces, and expanding to new interfaces.

I’d suspect, too, that riders’ cards are an interface on their own, so that riders can still be allowed to pay if any wireless communication system is down.   The really interesting discussion here may be whether the card is an interface between the rider and a public transportation system, or between a rider’s bank account and the public transportation system.

All I know is, I don’t want to be the one testing the interface.

## The iUseCase, or how I learned to stop worrying and like Apple products

Every time I use my iPhone, I realize what a thing of beauty it is.  It’s just so… well-designed.  I’m no Apple enthusiast, and the closed nature of their systems bugs me more than a little.  Still, Apple knows how to design a certain kind of product.  My iPhone is intuitive, easy to customize, and has everything that I need in a phone and portable data device.

So, what did Apple do so right in designing its phones?  The designers of the iPhone really seem to know how its users are going to use the phone, and have designed around it.  They truly understand the customer requirements of the phones’ consumers.  And Apple understands these requirements through detailed use cases.

Use cases are basically a method of modeling how a user is going to use a product.  At their best, use cases describe most ‘common’ uses of the system.  For instance, one use case for the iPhone is ‘make a phone call’.  Another may be ‘look up an address’.  The descriptions of these use cases would describe the exact steps in each use case, which component of the system (or which external interface) is performing each step, and what the assumptions and final results should be.

Let’s look at the use case ‘make a phone call’.  Your assumptions may be that you’ve paid your service bill recently, and that you’re in an area where you can actually get coverage.  The first step would be that you (the user) would find the keypad, and push the numbers.  The end result would be a telephone connection with another user.  An alternative path for this use case would be using some method of speed dial.
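A use case like this can be captured as structured data, which makes the pattern-spotting in the next paragraph easier. This is a minimal sketch of my own devising, not a standard notation; the fields simply mirror the elements listed above: assumptions, steps with the actor performing each, the end result, and alternative paths.

```python
# Minimal sketch of a use case as structured data. The fields mirror
# the elements discussed in the post: assumptions, (actor, action)
# steps, the end result, and alternative paths. This is an invented
# representation, not a standard use-case notation.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    assumptions: list = field(default_factory=list)
    steps: list = field(default_factory=list)  # (actor, action) pairs
    result: str = ""
    alternatives: list = field(default_factory=list)

make_call = UseCase(
    name="make a phone call",
    assumptions=["service bill is paid", "user has network coverage"],
    steps=[("user", "find the keypad"),
           ("user", "push the numbers"),
           ("system", "connect the call over the carrier network")],
    result="a telephone connection with another user",
    alternatives=["dial via speed dial"],
)
print(len(make_call.steps))  # 3
```

Once dozens of these are written down in a uniform shape, sorting and grouping them by actor or assumption is trivial – which is exactly how patterns about your users start to emerge.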

If you put together enough of these use cases, patterns start to emerge about who you think your users are, and how you should design the product.  Apple’s advantage is that they truly know their market, and who is buying their products.

This is perhaps one reason that open-source alternatives to the iPhone aren’t growing quite as fast.  Generally speaking, when you make your system completely open, it’s hard to anticipate who will be using your system, and how they’ll be using it.  The open-source ideal is powerful for other purposes (and I’ll talk about that soon-ish), but you can’t really come up with an infinite set of use cases.

For me, though, I’m a pretty limited user of my phone.  I don’t want to hack it (yet), and I don’t really want to do too much mucking around.  Still, I’m curious to hear anyone else’s experiences!

## Wrapping it up, and the perfect party!

What I’ve spent the last 10 or so posts describing in detail is known as the “Vee” Model (or “V-model”) of engineering design, in particular where it overlaps with the waterfall model (described in this post).  The image below shows the Vee, in detail (image courtesy of the Project Times newsletter, http://www.projecttimes.com/).

This is why it's called a Vee

Personally, I like the Vee a little better, as its intent is to provide the designer of a system feedback at every step of the design process.    The waterfall model shown in my previous post on the Big Idea behind this blog tends to be more of a rigid, step-by-step process, with different groups ‘throwing it over the wall’ at each step (with ‘it’ being aspects of the system design).

So, on to the fun part of this blog – applying this model!  And, of course, what could be more fun than a good party?  I’m based out of San Francisco, and, if I’ve learned a few things in my years in this city, it’s that San Francisco loves a good party (with at least two big ones just two weekends ago).  Parties are big public events, and need to be coordinated, just like a complex system.  My neighborhood recently had a just-for-neighbors party.

So, how do we design the perfect party?  What does ‘perfect party’ even mean?  Well, let’s start by looking at our ‘customers’.  For our party this weekend, the customers were the block’s neighbors.  There are a lot of kids on our block, so we needed to keep this party pretty family-friendly.  This means that one set of customer requirements could center around activities for kids – what kinds of activities are appropriate, how expensive or reusable they could be, how messy these activities should be, that sort of thing.  Another set of customer requirements could center around whether food or drinks should be available, and what kind of dietary restrictions neighbors have.

The design requirements could center around whether we need tables, plates, the kinds of food available (sides, salads, main courses, desserts), the ‘flavors’ of activities available to kids (games, contests, etc), and so on.  The specifications should be something like location and number of various components of the party – how many cases of drinks, how many tables, where should the tables be, and that kind of thing.

Integration is, of course, the setup for the party.  Verification and validation should be easy – throw the party, mingle, and see how the guests/neighbors like it!

## Toy Model – Verification and Validation

Our previous discussion describes a more-or-less completed system.  With the TM in an optimized, completed state, it’s time to verify its performance capabilities, both against TFT’s needs and the technical system requirements.  Let’s pick a few technical requirements to start our discussion of verification.

First, one sub-requirement for the supercomputer onboard the TM is “The TM’s supercomputer shall have a tracking system for TFT’s enemies”.  How do we begin to define a test to verify this requirement?  To start, we have to define TFT’s enemies – who are they?  How do we use a subset of the enemies without having to go out and capture them (we would need a TM for that)?  Also, how do we define what it means for the tracking system to function properly?

Let’s say that we can address these questions – we’ll use drones that move in patterns similar to TFT’s enemies, and use a few dozen of them, to ensure statistical significance.  We’ll then double-check the position of these drones with an external GPS measuring device, and determine just how much error in the tracking module is acceptable.  Seems simple enough.
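The core of that verification test reduces to comparing the tracker’s reported drone positions against the ‘truth’ from the independent GPS unit, and passing if the error is within a stated tolerance. Here is a sketch of that check; the 5-meter RMS tolerance and the sample fixes are invented for illustration, and a real test would work in three dimensions with many more samples.

```python
# Sketch of the drone-tracking verification check: compare the tracker's
# reported (x, y) positions against independent GPS 'truth' and pass if
# the root-mean-square error is within tolerance. The 5.0 m tolerance
# and the sample data are invented for illustration.
import math

def rms_error(tracked, reference):
    """Root-mean-square distance between paired (x, y) position fixes."""
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(tracked, reference)]
    return math.sqrt(sum(sq) / len(sq))

def verify_tracking(tracked, reference, tolerance_m=5.0):
    """Pass/fail verdict for the tracking requirement."""
    return rms_error(tracked, reference) <= tolerance_m

tracker_fixes = [(0.0, 0.0), (10.0, 3.0), (20.0, 6.0)]  # from the TM
gps_fixes     = [(1.0, 0.0), (10.0, 4.0), (21.0, 6.0)]  # independent GPS
print(verify_tracking(tracker_fixes, gps_fixes))  # True
```

Deciding on that tolerance number – and on how many drone-position samples make the comparison statistically meaningful – is exactly the detailed planning the next paragraph gets into.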

The problems come in with the specifics (or, as they say, “the devil is in the details”).  To get several dozen drones, we need to design those drones, and that means a whole separate set of requirements and specifications.  Even with the drones designed, we need to ensure that our test has some level of statistical significance, and this means budgeting for the appropriate number of drones.  This also means some planning is necessary to determine how many drones can provide statistical significance.  The external GPS system to double-check (“independently verify”) the drones’ location requires a design process similar to that of the drones and the TM itself.  Likely, this system would be purchased, rather than developed by the same team developing the TM.  The process for selecting these sorts of systems is often called a “Commercial Off The Shelf” (or COTS) process.  I’ll go into more detail on this in a later post, but for now, note that this is no insignificant expenditure of effort, and you often can’t just run down to Best Buy to find a system that will fit your needs.

Let’s say for now that we can verify all design requirements.  How do we then validate the customer’s input requirements?  Let’s pick the last one, “The new TM needs to hold super-beverages”.  Validation testing, in this case, requires very detailed planning.  Which super-beverages should we use?  How much is reasonable?  How well do we have to hold these beverages, and how accessible are they to the operator of the TM?  In this case, we’d actually need to rely on customer testing and reasonable expectations based on ‘normal’ TFT dimensions such as height, weight, and head size to determine comfort and ease of access to the full range of super-beverages.  As to which beverages to test, it may mean that we need to investigate historical beverage choices for TFT.  How well the TM can hold the beverage will depend on system liquid sloshing, material compatibility with the container, and range of temperatures.

Once we’ve verified and validated the TM, we are ready to fight crime and poor geometry!

## Validation – Did we build the right thing?

The system’s been built, and shown to work as the designers intended.  It’s almost time to release the system to waiting customers.  However, the task of validating the system still remains.  Validation checks the system’s function against the customer requirements, rather than the design requirements.  Basically, validation means ensuring that the right system was designed, according to the customers.

Keep in mind that the term ‘customers’ encompasses a fairly large group.  Customers are the folks who buy the system, but also various regulatory agencies and market forces or company traditions (think of the distinctive look of Apple products).  These are the ‘regulatory’ and ‘internal’ customers, respectively.  These customers’ needs must be addressed, and should be tested.

While validation is often not as technologically difficult as verification, and doesn’t frequently need a dedicated test system, it does have other challenges.  Customer requirements should be validated through customer input; often validation is carried out through customer interviews, or usage data at customer sites.  Validation usually starts with beta testing.  Beta testing usually involves sending customers who have a good working relationship with the company a complete or almost-complete system (a ‘beta’ unit) and collecting data on the customer’s usage of, and satisfaction with, the system.

Collecting this data can be a logistical challenge, as regulatory bodies often require very specific tests to be executed during validation, and this means designing a test for the system.  In addition, customers are busy people, and don’t always have time to add something else to their day, especially when they’re not getting a direct benefit from doing more work.  Customers face a real ‘what’s in it for me’ problem (often abbreviated WIIFM), and so need to be convinced to do this one extra task.

Other customers don’t like being monitored, or don’t understand the purpose or data collection methods of a particular study, and so usage data is either incomplete or difficult to interpret.  Interpretation of the data can be difficult, and so validation tests need to be designed so as to make data clearly interpretable and traceable to a customer requirement.  Automating data collection as much as possible also helps reduce ambiguity.  There will likely be some ambiguity, no matter how well-designed validation tests are.

To help both ease the customer relationship and help interpret the data, the customer relations team or marketing group will typically assist the technical staff in both designing and executing experiments.  These folks act as the customer’s advocate on the team, to ensure that the customers’ needs are truly being met.  Often, they’ll also act as ambassadors to customers, instructing any customers running validation, so that they understand the experiment and its purpose.

The word ‘validation’ often gets confused with verification, but it’s important to remember that there is a distinction, and that these are two very different kinds of testing.  That said, they often get executed at similar times, and the next toy model will use that sense of simultaneity (to keep things to a manageable size).

## Verification – Was it built right?

Apologies for the late post – I’ve been in the middle of moving, and the dust is finally starting to settle.

Now that the system is built, integrated, and optimized, we need to ensure that the system behaves as expected.  This is known formally as verification, and the intent is to answer whether the system was built ‘right’.  In this case, ‘right’ means that the system operates as intended, and was built as the designers wanted the system to look, ‘feel’, and behave.  We can assume that, at this point, basic testing of the system design has been done to ensure compliance to specifications, but our verification testing is intended to go beyond basic, component-level functionality.

In order to ensure that the system operates as intended, we need to first define the conditions under which the system is intended to operate.  These should be defined in the requirements document, either as a set of assumptions, or as explicit design requirements for the system.  Operating conditions are often referred to as the ‘intended use’ of the system.  Intended use provides context for the extent to which the system’s operation must be verified, but should not restrict verification of the system unnecessarily.  I’ll cover this in more detail in another post, but it’s important to remember that system usage should drive system verification.

With the system’s usage well-defined, requirements should be prioritized for verification.  The requirements that should be verified first are the highest-priority requirements; that is, the requirements whose failure would prevent the system from behaving or operating at its most valuable.  Requirements that are considered ‘nonessential’ or ‘nice-to-have’ can be verified later than the essential requirements, but value to the various customers (as in the inaugural post for this blog) should be considered before ordering verification testing activities.

Verification can be executed in many ways.  Typically, one verifies system function by testing the system’s requirements and analyzing the data from these tests to show that the system fulfills the requirements.  However, systems can be verified by documentation, inspection, or modeling behavior.

When testing systems, we often need to design and build verification test units.  This implies a new system to be designed, with new system design requirements, and possibly a new set of verification tests.  Ideally, such test systems are smaller and more easily defined than the system to be tested, in order to keep the design manageable.  Often, ‘off-the-shelf’ solutions can be purchased for verification from outside companies, with certificates of verification or analysis.  In many cases, these certificates of analysis are sufficient to ‘prove’ verification of these test systems, though the team verifying the system needs to understand the possible consequences of relying solely on the vendor’s analysis or verification data.

Naturally, verification can’t cover all scenarios in which a system will be used, and the tradeoff that any testing team needs to make is between completeness and reasonable timescales for testing.  So, the next time you experience a temporary glitch in a system that you’re using, think about how likely it is that the manufacturer and seller were able to test the scenario in which you’re using the system.