So Apple has announced a fancy new phone, but it’s not as fancy as people wanted it to be.
From this BBC story, marketing jagoff Gregory Roekens:
But in terms of style, it was underwhelming. People were expecting iPhone 5, but instead it’s almost fixing the weaknesses the previous phones had.
I’m glad that we can all agree that the way forward for tech companies is to follow the worst possible practices in order to impress morons and marketers. Also, Siri is. . . a pretty crazy new feature, and if you’re pooh-poohing it because you want a fancier case, you’re a fucking idiot (again).
So while we were at Pix, following a lovely dinner at Bete Lukas, Matthew and I got into a little. . . discussion, as it were, about online education, which was precipitated by his wife telling us that her marketing professor had said, no doubt in the haughty tone of all morons who seek to reduce their betters, that in the future all PhDs would work in customer service, and all education would be online. The discussion didn’t really go anywhere, because Matthew wanted to talk about the ideal learning conditions for fantasy autodidacts, and I warned against the inevitable future in which teaching the children of poor parents would get you arrested. It was fun, but not very enlightening.
In the car on the way home, C (who is distinguished in this context by the fact that she cares more about how real people learn than she does about how hypothetical constructs do the same) said she had to not listen very closely to us, because the conversation made her sad. I had responded glibly to the quoted comment because it is, to me, patently idiotic. The problem is, in America we can no longer laugh off the patently idiotic, especially as it relates to education. As Americans increasingly find it more important to make sure others are worse off than to improve their own circumstances, educators and educational structures are so at risk that even the most moronic prediction may well prove prophetic.
Bruce Schneier put up a post about the inescapable truth that systems need trusted users, and the dangers inherent therein. He observes, “Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex.” He suggests a number of ways to reduce the risk of a breach by a trusted user, but also cautions that trying to cover every angle is ultimately a bad idea. The post concludes as follows:
In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.
Of course, these precautions are applied to people who actually, you know, do things. Once you get far enough up the ladder that you aren’t really doing anything, you’re just figuring out ways to magically make money appear out of thin air, all bets are off and there’s no impetus of any kind to be either honest or honorable.
Hard to imagine how you can convince corporate types to behave any better on the basis of stuff like this.