Corruption, Governance and Open Offices

(Note: Last week was a bit of a mess with some work travel and other stuff going on, so I didn't have a lot of time to put together the things I found interesting. It happens.)

I haven't had a chance to watch all of the videos yet, but the TED series on the Depths of Corruption is well worth a watch. There's an obvious focus on corruption in government, but business practices get a bit of a look-in, too.

Fred Wilson shared some interesting thoughts around the governance of software systems. He links to a post by Brad Burnham, one of the other USV partners, from way back in 2007 that very presciently talks about the rise of decentralized governance systems (using Craigslist as an example). Given USV's investment thesis, it shouldn't be a shock to anyone that this topic is top of mind for them, but it's also a fun rabbit hole to dive into and think about, especially when applied to all of the emerging blockchain-driven developments getting airtime right now.

In other "I'm not surprised by this at all" news, a study out of Harvard Business School published by The Royal Society finds that open plan offices actually decrease the amount of face-to-face interaction in the workplace:

In two intervention-based field studies of corporate headquarters transitioning to more open office spaces, we empirically examined--using digital data from advanced wearable devices and from electronic communication servers--the effect of open office architectures on employees' face-to-face, email and instant messaging (IM) interaction patterns. Contrary to common belief, the volume of face-to-face interaction decreased significantly (approx. 70%) in both cases, with an associated increase in electronic interaction. In short, rather than prompting increasingly vibrant face-to-face collaboration, open architecture appeared to trigger a natural human response to socially withdraw from officemates and interact instead over email and IM.

The full article is (as you'd expect for an academic paper) fairly dense, but Cal Newport from Georgetown University has posted the highlights in a much more accessible form.

After posting about the podcast where Anil Dash spoke about how tech companies should be held responsible for the products they make, I've been seeking out more info on the subject, and came across this great piece: Data Violence and How Bad Engineering Choices Can Damage Society.

The Harvard researchers didn't appear to have questioned the integrity of their source data and hadn't thought through the unintended consequences of implementing a system like this. They hadn't acknowledged the problem of racial profiling or of damaging innocent people who are wrongly identified. In fact, they hadn't considered any ethical obligations at all.

When asked how the tool would be used, one of the project's computer scientists confessed he didn't know.

"I'm just an engineer," he said.

This kind of response is a cop-out, and the audience knew it.

If you have the temerity to insert your work into a political issue that, by and large, doesn't immediately affect your life, you should also be prepared to accept the consequences--or, at the very least, answer a few hard questions.

One of the big questions that occurred to me after reading this is around the concept of 'unintended consequences': isn't it (or shouldn't it be) incumbent on the people building a product to try and work out what could go wrong, or how that product could potentially be abused? There are lots of really smart people building this stuff, so how is it that they can't seem to run through a few quick thought experiments to work out how their creations might be misused, and at least attempt to implement some safeguards?

It's one thing for companies to take responsibility for the things they make, but people generally need to share the load when it comes to the things they do. Vanity Fair has a great interview with Tim Berners-Lee about how the web has morphed into the often (mostly?) toxic environment it is today. He's open about how the lack of action on the part of the web's founders has affected its development: "We demonstrated that the Web had failed instead of served humanity, as it was supposed to have done, and failed in many places," he told the magazine, adding that the increasing centralization of the Web has "ended up producing--with no deliberate action of the people who designed the platform--a large-scale emergent phenomenon which is anti-human." But the article also speaks to how our actions, the actions of the people who use the web, have gotten us to where we are today:

The power of the Web wasn't taken or stolen. We, collectively, by the billions, gave it away with every signed user agreement and intimate moment shared with technology. Facebook, Google, and Amazon now monopolize almost everything that happens online, from what we buy to the news we read to who we like.

Berners-Lee's new project, Solid, sounds kind of interesting (but is super rough around the edges right now). It reminds me a bit of the early days of Ello, but with a more macro approach and a much larger vision... I hope it catches on in a bigger way.

To close out this week, it would have been great if someone had been around to do this kind of analysis in my peak Mario Kart-playing days.