Work: 2 key considerations about your future… or maybe I’m a renegade

Standard

I think a lot about work. Every aspect of work. Not my specific job or career but the overall concept of work.

And I always have. Even when I was in high school/college, I was trying to wrap my head around the different aspects of work. Work life, labor policy, pay, equality, office life, teamwork, reconciling being a non-conformist introvert with the “rah rah” of corporate cheerleading, recruitment, innovation and automation in recruitment, the shift from “pounding the pavement” to targeted online search and the role of technology in hiring and working, the economics of hiring, maintaining a workforce, building small businesses and startups, fitting into a corporate culture (or not) and finding one’s professional niche. I have thought a lot about the past (the “job for life”), the present (freelance/for-hire/impermanent job culture) and the future. All of this can include everything from education and how people learn and enter the workforce to how individuals can find just the right career and balance that works for them. It’s no more cookie cutter than anything else in life, but often it feels like the whole concept of work life is a conveyor belt in a factory making millions of the same commoditized, non-differentiated product.

No, not every company or job is alike. Very different cultures, industries, expectations… but when it’s boiled down to, for example, the job ad – the hook that gets someone to apply in the first place – there is very little differentiation. Recruiters can ask for different approaches to applying (for example, “send us a video and tell us about yourself” – but that just lights up all the pseudo-legal, proto-litigious lights in my head, “And open myself up for blind discrimination because I’m a middle aged lady?”) and change things up, but even the fresh wording in job ads is filled with subtle and not-so-subtle coding. A lot like real estate ads that describe a dilapidated shithole as a place with a lot of potential, if you just think outside the box and will just use your imagination, elbow grease and a lot of energy to turn it into your dream home, many jobs turn out to be the same.

And maybe these limit us – all of us. For example, I might see a job description that mentions how “young” and “fresh” the company is, and I immediately think that an environment emphasizing youth a) probably doesn’t want anyone over 30, b) won’t attract anyone over 30 and/or really experienced, c) probably demands much more than it gives back, and d) would not be a good fit. And maybe nine times out of ten, it wouldn’t be a good fit. BUT… what if the job description was written by just one person with a bias or interpretation, and that is not at all what the job or company is about? Or what if, like Microsoft, every job ad spewed into the world read like it was written by a computer?

Thinking about limitations, probably the biggest concern/lingering thought I have on work pertains to remote work and home offices. I have long felt that technology would enable employees and employers alike to have their pick of the right fit regardless of geography (this has not managed to bear out the way I expected on a large scale). I’ve become semi-activist in my firm belief in remote/distance/distributed work and flexibility in the workplace. I’ve run my own business from a home office for 19 years without a hitch, but somehow most regular jobs and companies aren’t up to speed with that unless they are working with freelancers/outside “renegades”. So maybe I’m a renegade.

The point of this is that work takes up a lot of our lives. And we can end up feeling pretty miserable just because we take on a job (and stay in it) when it’s not the right fit. I read an article today that highlighted five things you need to make sure you do before you sign on the dotted line for any new job.

From this, I took away two key points as an extension of the writer’s points:

“It’s so tempting to just take the offer and put the job search to rest — but your career, not to mention your health and sanity, are more important than a quick close!”

This statement is true – no job is worth your sanity or health. You might need a paycheck, and you might say yes to a job that won’t be your career to pay the bills. But looking long-term, you’ve got to look for a good fit. BUT (!) what struck me here is the statement that one puts the job search to rest.

In this day and age, in an uncertain and even unstable economic climate, and with the ease/automation of the search, does anyone ever “put the job search to rest”? Aren’t you always kind of keeping your eyes and ears open, feelers out and antennae up? Am I just crazy in that I regularly update my CV, keep an eye on the job market and in-demand skills, take on occasional freelance and volunteer opportunities, and sometimes apply and interview for jobs (if not to get the job, then to keep my interview skills intact)? Maybe because I have obsessed about work all my life this restlessness is to be expected, but perhaps a less obsessive but certainly thoughtful and measured approach – always keeping the job search at least casually open to possibilities – would be advisable.

The second point:

“It can take nerves of steel to pass on a job opportunity, but if you’ve ever had the wrong job, you’ll know why it’s important to have standards.

The wrong job can shorten your lifespan.”

I agree on the stress and shortened lifespan. I’ve had some wrong jobs, and I found myself tied in knots, stressed, unable to sleep… and so blinded by the need for a job that I could not even recognize the signs until I had moved on to a new/better situation. Stay clued in when your mind, body and heart are trying to tell you something. As the statement above says, it takes nerves of steel to say no – but you are your own best – and sometimes only – advocate. You’ve got to have the guts to say no, back out or take yourself out of the running if the fit just isn’t there or if you have doubts. Or even sometimes when your own life circumstances change and might render you temporarily the wrong fit for a job or company. I have finally learned to do this – for the most part. Sometimes it’s complicated, and a job offer (or job) has a lot of contingencies sucking you in, like eight octopus arms squeezing you. Even after some let go, others still tether you there. Recognizing those tethers and figuring out how to ease your way free of them is a good strategy.

But… what most struck me with this statement is not just that you should say no to the job offer but also that you should think seriously about whether to even go through with the interview – or subsequent interviews in the case of multiple interviews. Sometimes you see a job that looks perfect on paper. You read the ad and you check all the boxes and are ready for or need a new challenge. You apply. You are asked to an interview, but something about the initial exchange leaves you ill at ease. I have learned that this too is a test of will. When I was young and freshly out of college, just getting interviews was a triumph. I went to a lot of painful interviews for things I did not remotely want to do. Back then I sort of had to – but that marked me and influenced this idea that I couldn’t say no, especially because I was the one who had initiated the application process. But you can and should say no if something feels “off” – while you may well have been interested in the first place, interest cools – and you will thank yourself later for not putting yourself in an awkward situation (and for not wasting your own or a potential employer’s time).

It’s your life, your work. You don’t have to be a renegade but you also don’t have to settle for anything that threatens to kill you. If the wrong job can shorten your lifespan, at least find a way to dominate and enjoy the lifespan you have.

Online shopping, reality and speed – what is important in the customer experience?

E-commerce opportunities have revolutionized modern life. We don’t have to go anywhere or actively search for anything, going from store to store seeking out the one needed, elusive item. And rarities? Forget it. You can find it online with a little bit of attentive searching. The time-consuming hunt – trying to find something out of print, or a song when all we knew was a single lyric, as we did in the old days – is nothing now. And maybe we lose that sense of accomplishment and the appreciation of having something simply because of the work we had to do to get it. (The same principles were at work in my early refusal to get email – I wanted my paper-and-pen penfriend world to remain the same… rare, personal and full of anticipation.)

And while online shopping has certainly made my life richer (as stupid as that sounds), I recognize that the in-person, “instant gratification” retail shopping experience also is not going away. People want to feel, see, taste, touch, hear everything and experience something tactile, particularly in trying on clothes and the like, but at the same time, online shopping means you’re no longer limited to what you can find in a local shop or the mall or something. And that’s why I love it – I hate shopping in person.

How are some of the biggest e-commerce giants tapping into massive troves of data on shoppers’ habits and preferences to tailor and curate in-store shopping experiences – and, in turn, drive shoppers back to the e-commerce platform?

Amazon.com’s new flagship store experiment is a case study in doing exactly this. When I read the Vox article about this store, I was as perplexed as the writer initially had been. Why would the entity that pretty much singlehandedly made e-commerce a thing move back to brick-and-mortar? I never in a million years imagined that I would see this dubious store for myself, and yet the very same day I read the article, I ended up at University Village in Seattle and went into the store. Nothing impressive, nothing that would make me want to go back. And I don’t need in-person employees to offer to help me, only to tell me, “Well, you can buy that online… on Amazon.” DUH! It seems like a really expensive experience to create, one that can only benefit a limited number of people – if it really “benefits” anyone at all.

I recently watched Aziz Ansari’s Netflix series Master of None, in which he lamented the horror of going online to try to replace a beloved pair of shoes only to find that they were out of stock – so he had to (insert exasperated tone here) actually go out to find and buy them in person. “Who has time for that?”

Precisely.

Internet of things = Big Data – Big Brother?

This summer, George Orwell, the frighteningly prescient author of the classic novel 1984, would have turned 110 years old. In honor of the big day, a Dutch art collective, FRONT404, decorated Utrecht’s ubiquitous security surveillance cameras with party hats in an attempt to remind us that these devices are there, always on. The artists state: “By making these inconspicuous cameras that we ignore in our daily lives catch the eye again we also create awareness of how many cameras really watch us nowadays. And [how] the surveillance state described by Orwell is getting closer and closer to reality.”

But the real surveillance state, if we want to call it that, is not necessarily as blatant as the camera on every street corner (although the cameras play their own big part). The real “surveillance” is in the data collected about you every day in your online dealings.

And contributing to the acceleration of this trend is the much-discussed “internet of things” (IoT) concept. A spate of articles about the popular IoT idea has churned through the media, mostly painting the rosy picture of convenience and ease enabled by connecting everything (did we learn nothing from the re-imagining of Battlestar Galactica about the dangers of networks?), but also covering topics such as the challenges of keeping the “things” secure and the lines potentially crossed in terms of personal privacy. But if we stop to consider a few of the basic applications of IoT – such as rental cars with “black boxes” attached to monitor renters’ driving, or insurance companies monitoring their customers’ driving – the implications become clear. Where is the line between the collection of beneficial data and the violation of privacy?

A recent TechCrunch article framed the “monitored driving” angle as mostly a positive, but it does sound the alarm – and we should all be vigilant here – about the caution we need to take in weighing the implications. The article presents monitoring as letting you take risk into your own hands and gain from a prevention-based rather than a reactive insurance-claims model, but what do you give up for that? The insurance industry and its relationship with drivers/consumers is highlighted as a potential source of positive change through IoT and the application of data. Insurance companies want to use data to personalize your policies, which will supposedly make coverage and claims more reflective of your personal use. “The idea of ‘connected coverage’ means that insurance companies will encourage you to take risk management into your own hands by leveraging IoT. Ultimately, that could mean saving a big chunk of cash.”

Saving cash = good news! Right? Probably, yes. But the new “You + IoT + Provider = A New Dialogue” equation demands a greater vigilance than most consumers are willing to exert. Many compare the changes and conveniences enabled by IoT and Big Data to finally living in a “Jetsons” era. But the flipside is living under the watchful eye of Big Brother. We accept it because of its potential bonuses and benefits, but I ask again: where does insight end and intrusion begin? The pool of data available to entities in all industries will continue to proliferate. How can it be managed so that you are treated, based on the individualized data collected about you, as a unique customer – without being penalized for that same body of behavioral data?

A Backchannel/Medium piece by Angus Hervey perfectly expressed the ambivalence I feel and the questions we should all be asking:

“A world where our entire physical environment has the ability to exchange data with the internet and other connected objects. A world that’s more convenient, more streamlined, and more responsive to our needs. It’s also a terrifying prospect. A world of ubiquitous surveillance, a world where privacy is no longer a guaranteed right but instead a privilege you must fight for. The possibility of data breaches, backdoors into home systems, vehicles being hacked by shadowy forces, are very real.

Start thinking differently about the IoT. Make sure you place it within its larger technological context, and join the vanguard that’s establishing new design practices and principles for how we’re going to manage it. It’s not more of the same. It’s something new. And once we get past that stupid name, it’s going to change the world.”

Upstart web browser renaissance

If you search for the term “browser renaissance”, you find a lot of articles from 2007 or 2009 but nothing “new” – something from 2009 written about technology may as well have been written in 1909. I wanted to see if anyone had written much about the birth of several new web browsers in recent months and had commented on the why behind these developments. Many times in the past (when I worked in the browser industry) we heard a lot of talk about the browser space being dead, or that one browser had won the war over the others, or that the browser would, if not go the way of the dinosaur, at least seem irrelevant with the proliferation of apps and connected devices.

Of late, though, we’ve seen big splashes (at least within the tech media) made by the new Vivaldi browser (brought to life by former Opera Software stalwarts), a Yandex browser and a promised Microsoft launch of a new browser (to replace the REAL dinosaur in the browser landscape, Internet Explorer).

What is driving this? Why now? Sitting awake on a Saturday night/Sunday morning, broad ideas spring to mind. Much like late May delivers almost no darkness in Sweden, some technology is as cyclical as the changing seasons. Light disappears in the Swedish autumn and winter and reappears every year; browsers are declared DOA and, like clockwork, are revived in new forms. This is an overly simplistic interpretation, born of insomnia and an unwillingness to give it much more thought in this state.

As Opera has moved away from its former focus on browser features, Vivaldi has grabbed the baton and run with it, catering to what it calls “power users” (and tech fans of features).

Yandex has, particularly with its recent beta launch, focused squarely on privacy (outside its home markets).

And Microsoft… well, do we need to explain why Microsoft would need to murder IE rather than just let it go extinct? No. It needed to start from scratch. I suspect if I need to explain it, you would not have landed on this page in the first place.

So far I have only tried out Vivaldi and Yandex – I can’t say I am in love with anything. I am like most people in that I use different browsers for different, specific purposes, and I suspect my use of these new browsers will follow the same pattern.

Television is the new TV – The great disconnect

A few years ago when I worked in the tech industry, there was a lot of noise about “cord cutting” and how internet technologies could enable consumers to bypass expensive and inflexible cable companies. The vision at the time was just that – a vision that had not quite caught up to reality. But now we’re living in a slightly-different-than-imagined version of that reality. I know a lot of people who don’t have relationships with a cable company, and all their entertainment comes in some form of streaming and they can pick and choose, smörgåsbord style, what they want to buy into (or not). Of course there are still some constraints in terms of internet connectivity – with many people held hostage by the lack of choice in ISPs. But there has never been quite as much freedom to choose content and content source as there is today.

This got me to thinking, though, that even if we are essentially looking at content that we’d traditionally refer to as “television”, we are witnessing the birth of something quite new. The sudden lack of “programming”, the ability to watch whenever and wherever, the ability to avoid advertising (or succumb to more targeted ads), the shift toward creating truly amazing stories, and the elevation of “TV” shows to high art – or at least something that surpasses two-hour-film storytelling by adding richness, depth, character building and production value – all point to this. (One writer calls it “complex TV”, but I would go so far as to argue that it is not TV at all.)

Can we call what we are watching “TV” just because it vaguely follows the same format? When streaming and binge-watching become the norm – and shows are not necessarily created with traditional advertising streams in mind – the tethers to certain templates are broken. Creativity is unleashed in new ways and places. We see small-scale, independent online production and exclusively online productions complementing traditional programming.

We see “networks” creating original content, which was novel enough when it was no longer just the big three American networks – Fox had been in the game for some time. But when paid cable got into the game, quality, diversity and risk-taking became important, and ratings and audience share became less so. And when ratings still posed a challenge for a show on one channel, it grew likelier that another outlet would pick up the production in one way or another (examples include Netflix running with the long-dead Arrested Development to produce new episodes, and collaborations between different, non-traditional partners to continue producing the critically lauded but ratings-challenged Friday Night Lights and Damages).

Online outlets got involved and became their own kind of networks – with Netflix leading the way and disrupting the whole model of keeping viewers on the hook for months as a story played out week after week on television. Where home entertainment, like DVD box sets, unleashed the binge-watching/marathon phenomenon, Netflix and later Amazon Prime were able to produce and release full seasons of high-quality content whenever they wanted, beholden to no traditional “TV season”. Kicking that up a notch more recently has been Yahoo!’s step into the ring – reviving the former NBC, perpetually on-the-bubble comedy weirdness Community.

This is still called “TV content”. But is it? When Netflix or Yahoo! bring an actual TV show from a network back to life through their own channels, is it still TV just because the show came from there? This week’s episode of Black-ish has the four kids talking in horror about how, in the old days, you had to watch content when it was scheduled or miss it forever. No pause button! No choices!

Are the methods by which we watch influencing how these shows are made, when they are released? And if this is not TV any longer, what is it? It’s not programming in the traditional television sense. And when a content provider releases entire seasons at one time, they have changed the entire production process. The content is not consumed, perceived or even built in the same way.

I recently read about how “television writers” are forced to evolve and create an end-to-end story when dealing with a full-season streaming show that is released all at once, while traditional network shows can alter the trajectory of a storyline that does not perform well or is unpopular with viewers (e.g. the storyline in which Kalinda’s husband shows up on The Good Wife. It was not well-received, so the writers scrapped it at their first opportunity). But there are no U-turns or detours when Amazon gives us an entire season of Transparent. In that way, full-season, binge-bait “content dumping” is like the release of a film, only a film is maybe two hours, and a show is 12 or 13 hours (or half that, in the case of half-hour shows) – assuming that any of these content creators decide in the long run to stick with the semi-traditional “duration” lengths. This could change, too. It already has changed to some degree.

As we disconnect from traditional methods of content consumption, we are consuming new things in new ways – we are not watching television any longer, even if we are watching our content ON an actual television.

User accessibility

When I read Jose Saramago’s alarmingly disturbing novel Blindness a number of years ago (oh, and please do not bother with the film version), its vividness opened a door to a whole world that I, as a sighted person, had never considered. Saramago provided insight into the vast difference between being blind in a sighted world and a whole world of blindness. How much do the “able” take for granted – whether it is vision, hearing, or the ability to have unimpeded access to buildings or public transportation?

Having worked on and off in technology since the late 1990s, I have been to conferences and events where there always seem to be one or two people who yell longest and loudest about accessibility. It occurred to me, especially then – before technology was that convenient for everyone – that these activists had to yell that loudly to be heard and considered.

While these concerns have been tangential to me, related thoughts about how accessibility affects everyone still come to mind. A close friend in Germany has visual impairments and has often written to me about the modifications and combinations of technology she requires to make her way through the world – but she lives a complete and full life. Most recently, as part of her career, she has had to travel alone to new cities in different countries, so she not only faces the hurdles that confront all newcomers to a foreign place; she has to navigate them with impaired sight. Technological advances have made this so much easier. Her recent travels to Stockholm, in fact, were particularly aided and enhanced by modern mapping technology, which she could – like all of us – access from her mobile phone. But many of the accessibility features that are convenient for us are essential to her.

Wired.co.uk’s junior writer, Katie Collins, discussed these very same issues in a recent article on navigating the London Underground with visual impairments:

“The London Underground can be a hostile environment at the best of times. If you have a visual impairment, though, it can be even more brutal.”

The article highlights the Wayfindr app, which of course is just one solution/aid for the visually impaired. Collins experienced traveling through London with a simulated visual impairment. Her article, in addition to pointing out vital aspects of the journey that travelers might otherwise take for granted given the use of all of their senses, explains how this app works (or can potentially work when and if it expands) to give the visually impaired traveler a sense of security and independence. As Collins rode the tube to the Ustwo design office (Ustwo created the Wayfindr app), she had people surrounding her – and stated that she would not have felt comfortable without them. Did the Wayfindr app give her the independence she hoped it would?

There’s a long way to go and so much potential – both for this and other apps like it. I complain a lot about the intrusiveness of technology, but in many cases, like this, it is tangibly improving people’s lives and increasing their mobility.

With the app delivering audio instructions and vibrating signals in this trial run, Collins did achieve greater independence – it was, she reported, the first time in the day’s journey that she felt comfortable on her own.

 

Data protection, use, rights and apathy

Do we have any idea what we are giving up in letting our data run free? Not really.

Watch the frightening documentary Terms and Conditions May Apply and start to get the idea. In our race to have speed, convenience, access and mobility – among other things – we are willing to sign away rights, privacy and protection for ourselves without even knowing it. Or in lacking the attention span or interest to follow things like privacy rights or something like the net neutrality debate in the US, we lose choice and transparency.

As John Oliver explained while discussing the net neutrality subterfuge on his fantastic and revealing weekly HBO program, Last Week Tonight, companies can bury the information they are required to tell consumers – but don’t really want them to read or understand – in EULAs. Much of the time these terms and conditions are innocuous, but some are malicious and misleading and violate user privacy, leaving most users uninformed and having given blind consent.

At 9:50:

“The cable companies have figured out the great truth of America: if you want to do something evil put it inside something boring. Apple could put the entire text of Mein Kampf inside the iTunes user agreement and you’d just click ‘Agree’.”

It’s one thing to just complain and worry about data collection and use – but what kinds of solutions may exist? Craig Mundie’s piece in Foreign Affairs addresses the issue: “The time has come for a new approach: shifting the focus from limiting the collection and retention of data to controlling data at the most important point — the moment when it is used.”

Some kind of change has to happen because “… there is hardly any part of one’s life that does not emit some sort of ‘data exhaust’ as a byproduct. And it has become virtually impossible for someone to know exactly how much of his data is out there or where it is stored. Meanwhile, ever more powerful processors and servers have made it possible to analyze all this data and to generate new insights and inferences about individual preferences and behavior.”

Interestingly, Mundie cites the introduction and eventual ubiquity of credit cards as the truly disruptive technology that opened the consumer-data floodgates. Did anyone imagine that the truly disruptive technology – well before the internet – was the credit card? Credit cards gave financial institutions the access to create credit reports and scores and, in effect, to control a person’s life based on his or her spending and saving habits – to keep tabs on location, habits, tastes and propensities. It’s a gold mine of data that financial institutions could sell to retailers, with enormous opportunity for consumer exploitation. Consumers, though, have trusted that this would not happen because of data-handling and storage regulations.

But once the floodgates were open and regulations were in place, the internet came along – and data privacy and rights have not kept pace with how industry and technology have changed.

The part that is most alarming for me when I think about it is that whole business models and companies are built on this virtually free access to, collection of and manipulation, analysis, sale and packaging of data. How many of us are actually employed in industries whose bread and butter is somehow a link in that data collection and use chain?

Are the trade-offs of allowing all this data collection worth it? The Mundie article cites the public good as one reason not to entirely do away with data collection (but to limit/change it). One example is in a case when vast data sets yielded key findings in medical research, which can benefit society as a whole. But does that supersede the right of the individual not to have their own personal data used in some way to which they have not expressly consented? (Opting into a serpentine user agreement as a layperson does not really signify consent in my mind.)

The solutions Mundie proposes are interesting but fail to take personal laziness into account. People like to talk about having their privacy violated, but if taking control meant, as the writer suggests, that people “constantly reevaluate what kinds of uses of their personal data they consider acceptable” and take personal responsibility for context and for assessing the value of how their data are used, almost no one would do it.

People do not want to evaluate at all – which is why they just say yes or no in the first place – expedience, convenience. Damn the consequences.

Downsides to Cord Cutting

Sometimes I would love nothing more than to get rid of all my cords and cables – there is a tangle of them in my living room powering my multiple computers, my speakers, my jumble of mobile phones, among other things. Streamlining this would be great – and as I have written before, I wish I had a true mobility solution for my mobile (I wrote about the Revocharge charging system the other day, but my only complaint there is that it is part of a Kickstarter campaign and not available for sale right now. And that’s kind of the trouble with a lot of great potential solutions – they are ideas or prototypes that are either not on the market or not market ready).

Similarly, I have been living in the middle of the Swedish forest for years without ever connecting to TV or cable, so I never got hooked up in that sense at all. I get all the entertainment I need with strictly online solutions.

But today I saw a tweet from Adam Dachis about times when cutting oneself free of cables may compromise the quality of the experience. Under such circumstances, I too would rather hassle with cables, even if going without them is liberating. It’s far more important to me, though, to find a way to be free of cables when I need my phone to stay charged while I’m wandering the world than to have a seamless, high-quality wireless headphone experience.

The listening experience is absolutely key for music lovers – and having to recharge the headphones anyway makes going wireless doubly inconvenient. No, Adam, you're not the only one.

Sometimes cables trump convenience


Kickstarter and crowdfunding: Traction or drag of the crowd


I think a lot these days about the crowdsourcing revolution. Whether it’s crowdfunding in the form of Kickstarter and its peers, or crowdhosting like Airbnb, or crowdsharing of information, like on sites such as Trustpilot or Yelp, these things definitely have their good and bad sides.

Today: Crowdfunding

Many times in recent weeks I have been traveling – and every single time, I face some kind of phone-charging crisis. I don't think I am alone in this. We're all busy and counting on our phones as our connection to the world – to stay in touch, to take and send photos, to do our online banking (in fact, if my phone dies, I can't access my online bank at all). And now that the TSA is apparently asking people to turn their electronic devices on to prove that they actually are working devices, having a charged phone while traveling is a necessity for security reasons. Since I am one of those people who worries even when there is no reason to worry, I am always thinking about whether I have the right cable, or where I might find a power outlet wherever I happen to go. I know from experience that the phone battery is only going to last X number of hours – maybe fewer if I use the phone heavily – and that's a strangely helpless feeling, especially when you're in the middle of Budapest or sitting in one of those not-so-business-friendly airports that has NO power outlets anywhere.

With this panic in mind, I often flip through projects on Kickstarter and Indiegogo to see what kinds of things might solve my problems. One day I found a smartphone keyless door lock, the Goji, on Indiegogo, which got me pretty excited since I live in multiple places and often panic about what might happen if I lose my key in one city and arrive at one of the other places to find it missing. (My neighbors have keys – but what if they aren't home? And maybe I don't want neighbors to have keys. A keyless locking system controlled by mobile phone would let me give them immediate access if they needed it – and then rescind it just as easily, so the nosy old lady up the hill doesn't just come in whenever she wants. Haha!)

And recently I found the Revocharge system – a magnetic, snap-on battery and case for iPhones and Androids. This might not have excited me to such a degree had I not just experienced a series of on-the-go battery failures, the elusive hunt for a power outlet and then the loss of the one power cable I had for my iPhone 5 while wandering around Berlin. Does the Revocharge solve all the problems? No – you still have to avoid losing the battery or the case – but the chances are good that they would stay connected to the phone anyway; it is not like some stray cable that could fall out of my bag or be left behind somewhere. My only disappointment, of course, is that it is not available right now! It's still seeking Kickstarter funding. (For that matter, the Goji is not shipping yet either. AND… if I want to operate all my door locks from my smartphone, I need to have my phone charged all the time, too! So these products go hand in hand – our lives are ever more entwined with our phones, and we can't afford to let them die!)

When it comes to successful crowdfunding campaigns, though, I keep looking at different ones and am never really sure what propels some to success and not others. The two aforementioned campaigns absolutely serve real needs and are not "pie in the sky" ideas – both exist, at least in prototype form. In the case of Revocharge, it addresses a universal problem. That campaign still has a long way to run, so its funding goals may yet be met.

But I wonder about some of the campaigns that create a desire rather than serve a real need. Case in point: the Coolest Cooler. It serves NO need at all – and has more than 8.5 million dollars pledged to its cause. Or the campaign that famously set out to raise money to make potato salad.

Why are people inclined to give money to something that is gimmicky and has no real-world application?

Silicon Valley – Mean Jerk Time


I keep saying it – and laughing about it – but Silicon Valley is a bloody hilarious show. You really must watch it if you haven't been watching.

The most recent episode – the season finale – in which the guys spend hours working on equations to figure out how they could most efficiently jerk off the greatest number of guys in the Tech Disrupt audience cracked me up.

Also cool: seeing the Opera Software logo prominently displayed in the background. The good old days.