Austin Metric

Three Suggestions for the Project Connect Sub-Corridor Survey

Project Connect (PC) created an online survey to gather community feedback on evaluation criteria and sub-corridor priorities for high-capacity transit in Central Austin.  I am grateful that staff heard requests from Austinites for Urban Rail Action (AURA) – a group I am affiliated with – for such tools.  The existence of the survey itself is a big accomplishment.  It asks the most controversial question: sub-corridor priority.  It also sets up minimal barriers to participation: you can submit an unverified email address and take the survey repeatedly.

That said, there are three changes I would recommend for the existing public ‘beta’.

First, the ‘corridor matching’ dashboard at the conclusion of the survey should either be made fully transparent and adjustable or be scrapped.  After completing the survey, the user is presented with a grid of inputs for modifying their priority criteria.  Those inputs, in turn, change the corridor rankings that a gray-box algorithm calculates for the user.

[Image: pc_surv_rec – the corridor-matching dashboard shown at the end of the survey]

I say ‘gray box’ because the JSON objects returned by the AJAX post that fires on each input change give us some insight into the choices PC made behind the scenes.  Here’s a comparison of the weights assigned to four sub-corridors across the criteria.

[Image: pc_graybox – comparison of weights assigned to four sub-corridors across criteria]
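
For anyone who wants to reproduce a comparison like this, a rough sketch follows.  The endpoint URL, request body, and response shape below are assumptions inferred from watching the browser’s network inspector, not a documented API.

```python
import json

import requests

# Hypothetical endpoint seen in the browser's network tab when a priority
# input is changed; the real URL and payload shape may differ.
SURVEY_ENDPOINT = "https://example.org/projectconnect/api/corridor-match"

# Assumed request body: one weight per evaluation criterion.
priorities = {"congestion": 5, "ridership": 3, "cost": 2, "equity": 4}

response = requests.post(SURVEY_ENDPOINT, json=priorities, timeout=10)
response.raise_for_status()
payload = response.json()

# Assumed response shape: a list of corridors, each carrying per-criterion
# coefficients and an overall score.
for corridor in payload.get("corridors", []):
    print(corridor.get("name"), json.dumps(corridor.get("coefficients", {}), sort_keys=True))
```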

Some of these weights just don’t make sense.  For example, the congestion weight for West Austin relative to Lamar isn’t intuitive.  And that weight isn’t derived from an actual metric such as traffic counts or ridership; it’s just a coefficient.  Worse, the coefficient is not explained anywhere in the survey or in the links provided.  Users will walk away either under-informed or assuming the survey is rigged.  A ‘play’ feature is great, but it should give users real control to iterate.  Otherwise, the weights and the feature’s design tell us more about the persuasion goals of the tool’s designers than about the corridor preferences the community would arrive at by iterating.
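
If the underlying model really is a weighted sum of per-corridor coefficients, as the JSON responses suggest, exposing it would take very little code.  The sketch below is my assumption of how a transparent version could work; the criteria names and coefficient values are placeholders, not PC’s actual figures.

```python
# Minimal sketch of a transparent corridor-matching calculation, assuming a
# weighted-sum model. Corridors, criteria, and coefficients are placeholders.
COEFFICIENTS = {
    "Lamar": {"congestion": 0.9, "ridership": 0.8, "cost": 0.5},
    "West Austin": {"congestion": 0.7, "ridership": 0.4, "cost": 0.6},
    "East Riverside": {"congestion": 0.6, "ridership": 0.7, "cost": 0.8},
    "Highland": {"congestion": 0.5, "ridership": 0.6, "cost": 0.7},
}


def rank_corridors(user_priorities):
    """Score each corridor as the sum of (user priority x published coefficient)."""
    scores = {
        corridor: sum(
            user_priorities.get(criterion, 0) * weight
            for criterion, weight in coeffs.items()
        )
        for corridor, coeffs in COEFFICIENTS.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # The user's priorities would come from the dashboard's grid of inputs.
    for corridor, score in rank_corridors({"congestion": 5, "ridership": 3, "cost": 2}):
        print(f"{corridor}: {score:.1f}")
```

Publishing something this simple – with the real coefficients and a note on where each one comes from – would let users iterate with full knowledge of what they are adjusting.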

Second, it would be useful to show the aggregate results of the sub-corridor preference question.  If I were a PC staffer or consultant, this would sound like a bad idea: it would turn an engagement survey into a horserace, where groups backing particular sub-corridors would try to game the survey to make their corridor seem the most popular.

[Image: pc_surv_corr – the survey’s sub-corridor preference question]

That said, I think there are decent options to resolve this.  Depending on the platform and developers available, it might be possible to release a new version with social authentication, email verification, or simple cookie tracking to cut down on potential fraud.  That would allow real-time release of the sub-corridor counts.  Alternatively, the cumulative preferences could be released at a later date to avoid the ‘flame war’ escalation that a real-time number might provoke.  The important thing is that if we ask participants an important, controversial question, the results have to be open; that openness is what makes it credible that the answers will influence decision-makers.
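
A deduplicated tally is not much work either.  The sketch below assumes each submission carries a verified identifier – a confirmed email address or a persistent cookie ID – and counts only one response per identifier; the field names and sample records are placeholders.

```python
import hashlib
from collections import Counter

# Placeholder submissions: each record carries a verified identifier
# (a confirmed email or a persistent cookie ID) and a sub-corridor choice.
submissions = [
    {"identifier": "alice@example.com", "top_corridor": "Lamar"},
    {"identifier": "bob@example.com", "top_corridor": "East Riverside"},
    {"identifier": "alice@example.com", "top_corridor": "Lamar"},  # repeat visit
]


def tally_top_corridors(records):
    """Count each verified identifier's most recent choice exactly once."""
    latest_choice = {}
    for record in records:
        # Hash the identifier so the published tally never exposes emails.
        key = hashlib.sha256(record["identifier"].encode("utf-8")).hexdigest()
        latest_choice[key] = record["top_corridor"]
    return Counter(latest_choice.values())


if __name__ == "__main__":
    for corridor, count in tally_top_corridors(submissions).most_common():
        print(f"{corridor}: {count}")
```

Whether the counts go out in real time or on a set later date, deduplicating on a verified identifier is what would make the published numbers credible.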

Third, the conclusion of the survey should explicitly lay out the remaining participation schedule.  Time is running out for citizens to make their preferences clear in this process, and an updated schedule of upcoming events, shown immediately after the survey is submitted, would help them do so.