The US-based tech company that just went public in London

Alex Wilhelm is the editor-in-chief of Crunchbase News and co-host of Equity, TechCrunch's venture capital-focused podcast.

Boku, a United States-based carrier billing company, listed on the London Stock Exchange's Alternative Investment Market (AIM) recently, selling £45 million in stock. Only about one-third of those shares were from the company, however, with the rest sourced from extant shareholders.

The flotation is interesting given where Boku is based, how much it had previously raised and from whom, and how much it is worth post-IPO.

The IPO conversation here in the Bay Area revolves around unicorns and their ability (or not) to meet their last private valuations. But that situation doesn't apply to every company that will go public.

Let's peek into the Boku offering to see what happened and what we might learn from it.

The flotation

Boku sold 76.2 million shares of its equity in its debut at 59 pence apiece, bringing in just under £45 million. The proceeds were split roughly one-third to the company and two-thirds to what Interactive Investor called "existing shareholders."

Shares of Boku closed the day at 73.50 pence, up 24.6 percent. That's a very healthy first-day pop. It also lifted the company's £125 million IPO valuation (post-money) to roughly £156 million, or about $206 million after converting to dollars, give or take.

And now you can see why this is all quite interesting. What sort of company goes public when it is worth just a few hundred million? Well, as it turns out, the AIM is a place built for smaller companies to float on — not everything has to be the Big Board.

Last year, I caught up with James Clark, who works for the London Stock Exchange, to better understand why smaller companies might want to list on the AIM. Here's what he said:

AIM, London Stock Exchange's market for smaller, high growth companies was created to provide the optimum conditions for small and mid cap growth companies—i.e. valuations in the tens to hundreds of millions of pounds.

Because of the scale of the US exchanges, companies at this sort of valuation can struggle to cut through the noise. When they can even get onto market, we have found they can struggle to attract top tier investors, may have to offer deep discounts on their issue price, and run the very real risk of becoming so-called "orphan stocks" lacking adequate coverage from analysts.

Smaller, high-growth companies going public? Let's look at Boku's book to figure out how it fits the mold.

Boku by the numbers

What initially snagged our attention in all of this was the combination of San Francisco headquarters, Silicon Valley money and London IPO.

What would bring a company into that particular milieu? As it turns out, a company that is nearing a decade of life, has raised quite a lot of capital and has just found a second wind.

Boku, founded in 2008, according to Crunchbase, has raised nearly $90 million across rounds stretching back to 2008. Benchmark, Index and Khosla hopped in back in 2009, a16z in 2010 and New Enterprise Associates led the company's $35 million Series D in 2012. The firm tacked on $13.75 million in late 2016.

From its filings, here's what you need to know about the firm's recent growth. Setting aside its TPV (the payments equivalent of GMV), here's its revenue for the last three full years for context:

  • 2014 full-year revenue: $14.2 million.
  • 2015 full-year revenue: $15.2 million.
  • 2016 full-year revenue: $14.4 million.

Here are the firm's last two half-year results:

  • 2016 H1 revenue: $8.4 million.
  • 2017 H1 revenue: $10.2 million.

And, the "Underlying EBIDTA" results from the same two periods:

  • Underlying EBIDTA H1 2016: -$7.2 million.
  • Underlying EBIDTA H1 2017: -$2.8 million.

What does all that mean? That after a few years of uneven results, the firm posted solid growth in the first half of 2017, with revenue up roughly 21 percent year over year, along with improving profitability.

Not a bad time to go public, really.

(And if you are worried that the firm could return to the negative growth that it saw before, bear in mind that this is a smaller IPO. The stakes here are smaller than when a unicorn goes public while keeping its horn.)

Smaller IPOs: so what?

At the outset, we stated that Boku was interesting for a few reasons, including "where Boku is based, how much it has raised and from whom and how much it is worth post-IPO."

We can deal with that mix head-on now and, in the process, answer that final question.

What is interesting about Boku's headquarters is that it's a full ocean away from its trading market. Seeing an American tech company go public on a British exchange isn't unheard of. But I can't recall another venture-backed, U.S.-based company going public in a similar fashion (biotech aside).

Which brings us to Boku's fundraising. Its list of backers is a power list of Silicon Valley's venture class, and London's AIM helped provide liquidity to some of America's well-known money kids.

Finally, the firm's value at just over $200 million represents a smallish exit for a company that has raised around $90 million. But, notably, it is an exit of sorts, and one that doesn't involve crushing the firm into a purported open niche inside a corporate giant.

Crunchbase News reached out to the London Stock Exchange concerning the Boku IPO in particular, to which the group responded:

This IPO again highlights that LSE, particularly via our growth exchange AIM, has a track record of offering small and micro cap tech companies access to high quality capital at lower cost and reduced regulatory burden relative to US public markets. It also shows VC shareholders can diversify funding for portfolio companies and often achieve partial exit through a London IPO.

Fair enough, really, given what Boku pulled off.

We don't hear about many small or mid-cap tech IPOs here in the States, at least not at the moment. Perhaps, however, we'll see a few more like Boku?

In that vein, we also asked about the pacing of U.S.-based companies listing across the pond. To which the LSE responded with data that implies that we're a bit behind on the trend:

Crucially, history shows that London blue chip institutional investors are comfortable with companies with much smaller annual revenue (sub $10 million in some cases) and market caps (most AIM IPOs have market caps in the $50 million-$300 million range) than typically seen in US IPOs.

We have had about 20 North American companies from various sectors list in London this year with market caps ranging from c.$10 million up to $3 billion, and have seen such a significant increase in interest in the tech sector that we now have a number of US tech deals in our IPO pipeline.

We'll be on the lookout for the next Boku. More when it lists.


10 Things to Know About Marketing AI | Appboy

Artificial intelligence is starting to transform marketing—so make sure you're up to speed

AI has a long history as a marketing buzzword. But while we've been through a number of boom-and-bust hype cycles around artificial intelligence, recent technological developments and the increased availability of systems leveraging artificial intelligence have made it impossible to completely shrug off AI as vaporware or a fad.

It's (finally) time for brands to start thinking about AI. So, let's take a quick and dirty look at artificial intelligence—what it is, why you should care, and what it means for your business now and in the future.

Appboy's VP of Product Kevin Wang Discusses AI at LTR 2017

1. AI can mean multiple things

When people imagine artificial intelligence, they tend to think of its depiction in pop culture—you know, futuristic robots and malevolent computers, all biding their time until the humans aren't paying attention so they can take over the world. But while that's the popular image of AI, when we talk about artificial intelligence in the context of today's technology, we're not talking about Terminator-R2D2–Blade Runner AI: we're talking about something different.

What is AI, really? It's any artificial system that's capable of performing tasks that, traditionally, people had to do.

2. Current AI isn't General AI

AI Diagram

Pop culture generally depicts AI that's the mental equivalent of a human being in all meaningful ways. That's "General AI." We don't have that yet—and if we do get it, most experts think it's decades away. What we have instead is something called "Narrow AI," which is powerful but more limited in scope. Narrow AI often means machine learning, which leverages models that automatically adjust themselves based on inputs they receive. Machine learning's a subset of AI, but an important one, since most of today's tech uses it.
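
To make "models that automatically adjust themselves" a little less abstract, here is a tiny, purely illustrative sketch in Python (a toy example of my own, not anything from a production system): a one-parameter model repeatedly nudges itself to better fit the data it is shown.

    # Toy illustration of machine learning: a model with a single adjustable
    # parameter w fits y = w * x to a handful of (x, y) examples.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # roughly y = 2x

    w = 0.0              # the model's starting guess
    learning_rate = 0.01

    for step in range(1000):
        # Average gradient of squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad  # the "automatic adjustment" based on inputs

    print(f"learned w = {w:.2f}")  # close to 2.0, the pattern hidden in the data

Real systems use far richer models and far more data, but the feedback loop is the same: parameters move in whatever direction reduces the model's error on the examples it sees.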

The key thing about Narrow AI is that it's fundamentally a system that's capable of performing very, very specific tasks as well or better than a human being could. And that, we have in abundance. Think Watson, or self-driving cars. Think AlphaGo, which is specialized to play board games. These are powerful tools, but they're specialized—AlphaGo can't pilot your self-driving car (and shouldn't, frankly).

3. AI can't do everything

The thing is, a narrow AI like AlphaGo is designed to carry out one very specific task and to do it extremely well: win at Go. There are rules that govern the game, with strategies and possible situations that govern different outcomes, and AlphaGo's on top of all of that. But, ultimately, it's a one-trick pony—even if it is really, ridiculously good at that one trick. Try as it might, your self-driving car is unlikely to beat you at Jeopardy, at least in the near future. And that's true of basically all of today's AI.

4. With AI, it's all about the goal

When it created AlphaGo, Google's goal was to create a system that was incredibly good at playing Go—and they succeeded. Same thing with Deep Blue and chess, or personal assistants like X.ai and scheduling meetings.

But while Narrow AI is great at reaching those kinds of focused goals, it's not good at picking goals for itself—which means that the success or failure of modern AI depends in part on how good a job the humans overseeing the AI did at focusing it on the right task and feeding it the right data.

5. You're already using AI

That's the thing about AI: it's exciting as long as it's new and innovative, but as soon as it's not, people start to go, "Wait, this can't be AI—this is everywhere." And it really is everywhere. Email spam filters. That's AI. Amazon's product recommendation engine? That's AI. Even smart-dragging a series of numbers in Excel is technically a simple rule-based AI. Face it—AI's already here, and you're already reaping its benefits.

That's especially true if your brand takes advantage of Appboy's lifecycle engagement platform: when you use Intelligent Delivery to make sure that your messages reach each customer when they're most likely to engage or leverage Canvas with Intelligence (which lets you orchestrate smarter customer communication), AI's making that possible.

In general, though, we only call something AI if it impresses us. No one says, "Wow! Spam filters! I've seen the future and it's incredible!"

6. Mobile is driving an explosion of AI development

You know what AI loves? Data. It's basically the fuel that all of today's artificial intelligence systems run on. Without significant high-quality data, it's hard—or impossible, even—for AI to accomplish anything significant.

One big factor in the recent vogue for AI is the massive increase in available data that came with the rise of mobile and a more connected world. It's no big secret: customer data collection is easier—and more detailed—than it's ever been. People carry their smartphones with them everywhere, even sleep with them, and if you're a brand with your location data game in order, it's possible to target customers for messages based on where users live, where they go, how often they engage with your app or website, and the products and services they care about.

All of those data points can be fed into a well-designed AI system, and with those kind of massive data sets at their disposal, it's possible for an AI to identify patterns or pull out insights that a human marketer would miss—and to do it almost instantly. Those insights can do a lot to make your marketing efforts easier, smarter, more effective.

7. AI isn't a magic bullet

I know we've covered this, but it bears repeating: AI isn't like in the movies.

You can't just go to your computer and say, "Okay, Google, make my marketing better," or "Alexa, sure would be great to have higher user retention," and have it happen. You have to prepare your AI with high-quality data, and train it to accomplish the specific task that you care about, before it's going to be able to do anything impressive.

Maybe that's a little disappointing to hear, but let's be honest: even if your company had access to the kinds of artificial intelligence you see in the Terminator movies, would you really want to leave your customer engagement strategy in Ahhnold's hands? (Don't think so.) That's the temptation and the danger of AI for a lot of companies—there's something seductive about imagining that you can just sit back and let computers make your marketing work. But that's not real life. And if it were real life, you'd probably be out of a job.

8. Before taking advantage of AI, you need to nail the fundamentals

AI can be really cool. Intelligent Delivery is a really powerful tool, and Appboy's new AI-powered Canvas features are going to make a lot of marketing campaigns way more effective. But here's the thing: to fully leverage AI, you need to do the basic things right.

Using AI to optimize your brand's promotional outreach can do a lot to boost your ROI, but not if the people who download your app don't stick around long enough to receive any of those promotional campaigns. It's not rocket science. You've got to walk before you run, and you've got to get the basics right before you can make the most of artificial intelligence as part of your customer engagement efforts.

9. To make the most of AI, you have to identify places where humans and computers can work together

Robot dog

Today's AI is great at processing vast quantities of data to find hidden patterns and actionable insights, and making high-speed decisions in connection with a specific goal. People, on the other hand, are way better than computers at creative thinking. And that's a great thing.

AI shouldn't replace your marketing team—it should make your marketing team that much more effective. That's the thing about a great AI system: it's like having a really amazing hunting dog. Lassie knows how to smell where the foxes are and how to track them down, but she's got no idea what to do when she finds them, or why, exactly, we're hanging out in the woods in the first place. That's where people come in.

AI is the muscle. We're the brains of the operation.

10. Right now, AI is a nice-to-have, but the time will come when it's a necessity

It's incredible how much data marketers have at their disposal today, compared to ten, twenty years ago. And the tools we have for reaching customers would have sounded like something out of science fiction years ago. But, the way things are going, in another ten, twenty years, we're going to look back at 2017 and think "how did marketers ever do their jobs without having the technologies I've got now?"

As time passes, technology always makes things faster, easier, and more efficient. And while we grapple with the challenges and opportunities that are going to come with tomorrow's larger, even more detailed customer needs, AI is only going to become more essential to the way companies interact with their customers. Right now, it's totally possible to provide your customers with a best-in-class brand experience without using any AI. But in 10 years, who knows? The companies that pay attention now and start thinking about how they can use AI to build stronger, more sustainable customer relationships today are going to have a major advantage in tomorrow's marketing landscape.

You know, if the killer robots don't get us first.


How Cargo Cult Bayesians encourage Deep Learning Alchemy

There is a struggle today for the hearts and minds of Artificial Intelligence. It's a complex "Game of Thrones" conflict that involves many houses (or tribes) (see: "The Many Tribes of AI"). The two warring factions I focus on today are the practice of Cargo Cult science in the form of Bayesian statistics and the practice of alchemy in the form of experimental Deep Learning.

For the uninitiated, let's talk about what Cargo Cult science means. Cargo Cult science is a phrase coined by Richard Feynman to describe a practice in science of not working from fundamentally sound first principles. Here is Richard Feynman's original essay on "Cargo Cult Science". If you've never read it before, it's a great and refreshing read. I read it in my youth while studying physics. I am unsure if it's required reading for physicists, but a majority of physicists are well aware of the concept. Feynman writes:

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas — he's the controller — and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

The question that Feynman raises is whether a specific practice of science is based on experimental evidence or merely looks like scientific inquiry while resting on questionable foundations. IMHO, Bayesian inference is one of those questionable forms of scientific inquiry. It has its roots in an 18th-century conjecture:

https://en.wikipedia.org/wiki/Bayes%27_theorem
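
For reference, the conjecture in question is what we now write as Bayes' rule:

    P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}

where P(H) is the prior belief in a hypothesis H, P(D | H) is the likelihood of the observed data D under that hypothesis, and P(H | D) is the resulting posterior belief. Much of the criticism below centers on where that prior P(H) is supposed to come from.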

Judea Pearl pretty much summarizes the issues with Bayesian thinking in an article published in 2001, where he writes:

I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage's monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases.
Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false.

Marcus Hutter in "Open Problems in Universal Induction & Intelligence" writes:

Strictly speaking, a Bayesian needs to choose the hypothesis/model class before seeing the data, which seldom reflects scientific practice.

So to summarize: it's doubtful that knowledge is well represented by probabilities, erroneous priors aren't reliably washed out, and you aren't supposed to look at the data before choosing your hypothesis class and prior. Bayesian inference is loaded with so many issues that its use is highly questionable.

Yet Tenenbaum, in his 2011 paper "How to Grow a Mind: Statistics, Structure, and Abstraction," explains the essence of Bayesian inference:

At heart, the essence of Bayes rule is simply a tool for answering the question: How does abstract knowledge guide inference from incomplete data?

However, Bayesian inference offers no guidance on how to select an initial prior and no mechanism for how knowledge evolves from that prior. Underneath the covers, there is no engine to speak of. It's like describing a car by observing only its external body and its wheels while completely ignoring the engine inside. That's because statistical methods are only descriptive.
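
To make the "no guidance on the prior" complaint concrete, here is a minimal sketch of my own in Python (a textbook Beta-Binomial example, not something drawn from Tenenbaum's paper): the same ten observations yield noticeably different posteriors depending on which prior you happened to start with, and Bayes' rule itself is silent on which starting point was right.

    # Same data, two different priors, two different conclusions.
    # Beta(a, b) prior on a coin's bias, updated with 7 heads out of 10 flips.
    heads, tails = 7, 3

    priors = {
        "uniform prior Beta(1, 1)": (1, 1),
        "strong 'fair coin' prior Beta(50, 50)": (50, 50),
    }

    for name, (a, b) in priors.items():
        post_a, post_b = a + heads, b + tails        # conjugate update
        mean = post_a / (post_a + post_b)            # posterior mean of the bias
        print(f"{name}: posterior mean = {mean:.3f}")

    # uniform prior          -> 0.667
    # strong fair-coin prior -> 0.518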

If this rule were indeed axiomatic, as its proponents contend, what then is the opinion of physicists with regard to it? Physicists who are aware of the perils of Cargo Cult science should certainly be able to spot a questionable approach. The late David MacKay wrote a well-known book, "Information Theory, Inference, and Learning Algorithms," in which he explores machine learning from the perspective of Information Theory. MacKay's book should be required reading for every Deep Learning practitioner. MacKay was a physicist by training, and he writes in his book:

In this book it will from time to time be taken for granted that a Bayesian approach makes sense, but the reader is warned that this is not yet a globally held view — the field of statistics was dominated for most of the 20th century by non-Bayesian methods in which probabilities are allowed to describe only random variables. The big difference between the two approaches is that Bayesians also use probabilities to describe inferences.

He then devotes an entire chapter to "Bayesian Inference and Sampling Theory." Here he writes:

This chapter is only provided for those readers who are curious about the sampling theory / Bayesian methods debate. If you find any of this chapter tough to understand, please skip it. There is no point trying to understand the debate. Just use Bayesian methods — they are much easier to understand than the debate itself!

The only people who understand Bayesian inference are the Bayesians themselves. The only way to understand them is to drink their Koolaid. All arguments are dismissed because you don't understand what Bayesian means.

The statistical community has a habit of making arguments on the basis of obscurity. Here's a 2014 speech by John Rauser that highlights the problem:

The practice of statistics is in fact closer to alchemy than to science. Take a look at this ridiculous taxonomy of univariate distributions:

http://www.math.wm.edu/~leemis/chart/UDR/UDR.html

The method of argument in statistics is to throw in some combination of distributions from the chart above and use these as your assumptions (i.e., your prior) for how you arrive at a conclusion. It's alchemy disguising itself in the language of mathematics. It is not enough to give names to different kinds of distributions and mix them all up in the cauldron of Bayesian inference to arrive at a conclusion.

It is nonsensical to those who grew up understanding computation. How is this practice any different from the multitude of theories proposed by linguists to understand language? I guess Fred Jelinek was on to something fundamental when he remarked:

Every time I fire a linguist, the performance of our speech recognition system goes up.

Perhaps there is an equivalent to this in deep learning? "Every time you fire a statistician or Bayesian, then the performance of your deep learning system goes up." ;-)

The legendary Isaac Newton was in fact deeply involved in alchemy. Here's an image of his manuscript on the subject of transmutation into gold:

https://news.nationalgeographic.com/2016/04/160404-isaac-newton-alchemy-mercury-recipe-chemistry-science/

Isaac Newton, like Thomas Bayes, lived into the 18th century. When you don't have a foundation of strong first principles, then everything is alchemy. It's the human mind's natural state to keep making up stuff just because we observe patterns often enough. Repeating falsehoods often enough doesn't make them true, yet humans are susceptible to this cognitive bias. (That last sentence looks awfully like Bayesian inference!) At best, Bayesian inference is a human heuristic that masquerades in seemingly logical mathematics.

Chemistry exists because we understand the first principles of how atoms can be combined (derivable from quantum physics). The first incarnation of the periodic table of elements actually came before quantum physics. It was derived experimentally, and only decades later did we arrive at an elegant explanation based on the configuration of electrons in the valence shells of atoms.

The real reason others don't understand Bayesian inference is that they recognize Cargo Cult science and can't believe seemingly intelligent people steadfastly believe in it.

Well, Bayesian methods are a belief system. It is not very different from Occam's razor, the principle that explanations should be simple. To be perfectly fair, physicists also have their own belief systems; one of them is that there exists a Grand Unified Theory. This was Albert Einstein's goal all the way to the end of his life. The difference, of course, is that in the quest for knowledge, one's belief system should remain only a motivation for that quest and not the explanation of everything.

There was a time before the advent of Deep Learning that Bayesians were rulers of the Machine Learning field. Max Welling captures this in his essay "Are ML and Statistics Complementary?". Welling writes the following:

Also, the previous "hype" in machine learning (before deep learning) was about nonparametric Bayesian methods, clearly a core domain of statistics. At the same time, there are cultural differences between the two fields: where statistics is more focussed on statistical inference, that is, explaining and testing properties of a population from which we see a random sample, machine learning is more concerned with making predictions, even if the prediction can not be explained very well (a.k.a. "a black-box prediction").

Former rulers of the ML community do come from a Bayesian background, and this explains why many papers in Deep Learning are explained from a Bayesian viewpoint. I've argued elsewhere why it is an incorrect viewpoint; however, like many things in human discourse, orthodox thinking is very difficult to dislodge. The old guard will fight to the death to preserve their mysterious way of thinking.

This old guard would like one to believe that all inquiry should be framed in Bayesian terms. They borrow or steal ideas from other methods of inquiry and regurgitate these as being of Bayesian origin. One clear example is the use of variational methods. These methods are of statistical-mechanics origin; however, they have been recast as techniques originating from Bayesian thinking. Yann LeCun, in a Facebook post, documents the history of these methods. He writes:

the main concepts were inspired by statistical physics, not by Bayesian statistics, AFAIK, the authors were unaware of the Bayesian inference literature at the time.

He writes this in the context of a paper written by Yarin Gal (a student of the prominent Bayesian Zoubin Ghahramani). LeCun writes that Gal miscredits the origins of several papers as being of Bayesian origin, which he refutes on personal historical grounds. The Bayesians are indeed colluding to extend their influence on the nascent Deep Learning field. Work using statistical physics and information theory approaches is being deconstructed and explained as being Bayesian when the authors never subscribed to said belief system.

My perspective on Deep Learning is that it's an experimental science. Our experimental apparatus is the massive computation that we currently have at our disposal. These computer systems serve as a way for us to discover emergent predictive behavior that arises from homogeneous, simple computational elements (i.e., artificial neural networks).

Vladimir Vapnik who comes from a different (and more formal) machine learning discipline (see: SVM) has the following beliefs about machine learning in general:

Vapnik posited that ideas and intuitions come either from God or from the devil. The difference, he suggested is that God is clever, while the devil is not. … Vapnik suggested that the devil appeared always in the form of brute force. Further, while acknowledging the impressive performance of deep learning systems at solving practical problems, he suggested that big data and deep learning both have the flavor of brute force.

This idea that discoveries arrive through brute force (computation) emphasizes the current experimental nature of the Deep Learning field. Vapnik's arguments are more a testament to his belief system and a gut feeling that the current lack of theory is problematic. The theories that exist are extremely weak, and a majority of them are posed in questionable Bayesian terms. There are of course alternative theories that originate from Information Theory (Tali Tishby), Statistical Mechanics (Surya Ganguli) or even Cosmology (Max Tegmark).

Theoretical progress in Deep Learning should not be hindered by historical baggage like Bayesian methods. There are many more advanced models of reality that come from other fields such as Complexity Science, Critical Phenomena, Non-equilibrium Statistical Mechanics, Chaos Theory and Cybernetics that I would like to see applied to the explanation of Deep Learning.

The problem with Bayesians is that they don't understand the domain of applicability of their own belief system. A paper, "Statistical physics of inference: Thresholds and algorithms," goes into great detail on this question. You are more than welcome to pour your intellectual energies into this study. To summarize that paper, the applicability of Bayesian methods depends on how much prior information you actually have. Unfortunately, reality is not so kind as to provide one with perfect information.

It is entirely a travesty that a majority of Deep Learning explanations are framed in a dubious and antiquated belief system. One would think that there's a conspiracy going on that favors Bayesian theories over unfamiliar theories using unfamiliar vocabulary and mathematics. We need more powerful mathematical tooling to analyze discoveries in Deep Learning, otherwise it will forever remain in its current state of alchemy.

Editor's Note: For all those who keep complaining about this post, let me be perfectly clear: "Bayesian inference is a human heuristic". It is not a fundamental theory; it is by design a subjective form of logic and is therefore disingenuously used in many places where it should not be. See Pearl ( http://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf )

Additional Reading

https://arxiv.org/abs/cond-mat/9809190


CEOs Should Leave Strategy to Their Team — and Save Their Focus for Execution

Executive Summary

The common perception is that strategy is done at the top of the org chart, and execution is done below. It is exactly the opposite. First, consider that the act of "execution" is remarkably similar to the act of strategy: both are about making a series of choices about what to do and what not to do, about where to play and how to win. If a choice becomes untenable downstream, then all the upstream choices must be reconsidered. But if there is a difference between strategy and execution, it's this: execution is the act of setting up that series of choice cascades, identifying the manager responsible for the choices in each cascade, and following up to ensure that they make the choices for which they are responsible. In other words, strategy is the act of making choices about "where to play" and "how to win" across the various levels and parts of the organization.  Execution is the act of parsing out responsibility for those choices, making sure people actually choose (instead of waffling around in indecision). That means execution is really the C-suite's responsibility, and strategy belongs to the front line.

The common perception is that strategy is done at the top of the org chart, and execution is done below. It is exactly the opposite – let me explain why.

First, I should explain that I have always hated the use of the term "execution." Its common definition is fundamentally unhelpful, and contributes to what executives often call "the strategy-execution gap."

Usually when businesspeople talk about "strategy" and "execution," the former is the act of making choices and the latter the act of obeying them. My quibble with this characterization is that the things that happen in the activity called "strategy" and the activity called "execution" are identical: people are making choices about what to do and what not to do. In my 36 years of working with companies, I still haven't seen an example of a strategy that was so tightly specified that the people "executing" it didn't have to make major choices—choices as tricky and important as the so-called "strategy choices" themselves.

For example, imagine that the CEO's chosen strategy is to differentiate on the basis of superior "fit and finish," e.g., the flawlessness and detail-orientation of her products. She asks her EVP of manufacturing to please go execute that strategy. That strategy is not sufficiently specified for the manufacturing EVP to just do it without making a number of key choices, nor will it ever be. What are the various plausible ways of beating my competitors on "fit and finish"? Which has the highest probability of success? Is it even a plausible way of winning against competitors who already focus on "fit and finish"?

Since those choices look remarkably similar to the kind of choices made by the CEO, it raises the question: Why on earth do we call the CEO's choices "strategy" and the EVP's choices "execution"? Of course, as people are wont to point out, the EVP's choices are constrained by the CEO's choices, so aren't they fundamentally different? That would only be a valid argument if the CEO's choices were truly unconstrained. But ask CEOs and they will give you chapter and verse about the many constraints they face, from capital markets to boards of directors to regulations.

In complex organizations, there is very little choiceless doing. Even after the manufacturing EVP decides how to differentiate on fit and finish, his SVP of Plant Operations will have to make some important choices, as will her VP of Plant Logistics, and so on. That is why my conception of strategy work in organizations is as a series of interconnected choices: What's our winning aspiration? Where will we play? How will we win? What capabilities do we need in order to win? And what systems do we need in place to manage those capabilities? We might visualize it this way:

[Figure: the same where-to-play/how-to-win choice cascade repeating at every level of the organization, with arrows running both up and down between levels]

No matter where you are in the organization, the choices are the same: they are all where-to-play/how-to-win strategy choices. The arrows show that you can't simply start at the top and proceed downward, nor start at the bottom and proceed upward. You have to toggle back and forth until the choices fit with and reinforce one another.

And that is why I describe leadership in this layered choice cascade as follows:

  1. Make only the set of choices you are more capable of making than anyone else.
  2. Explain the choice that has been made and the reasoning behind it.
  3. Explicitly identify the next downstream choice.
  4. Assist in making the downstream choice, as needed.
  5. Commit to revisit and modify the choice based on downstream feedback.

Until recently, I had never once heard a useful or compelling definition of execution that distinguished it from strategy – literally not a single one. That is, until in a strategy seminar for senior Verizon executives, a young executive named Andres Irlando offered up the following: Roger, wouldn't you call execution the act of setting up that series of choice cascades, identifying the manager responsible for the choices in each cascade, and following up to ensure that they make the choices for which they are responsible?

A brilliant answer!  Strategy is the act of making choices about "where to play" and "how to win" across the various levels and parts of the organization.  Execution is the act of parsing out responsibility for those choices, making sure people actually choose (instead of waffling around in indecision).

This reverses the normal implied responsibilities. While the traditional definitions hold that strategy is done at the top and execution is done below, in this alternative, more useful definition, strategy choices are made throughout the organization and the responsibility for execution lies at the top.

If there's a "strategy-execution gap" in a company you lead, consider whether adopting the role of the executor would help you close it.


A Primer on Deep Learning | Deep Learning Platform - DataRobot

This is what the hype is about

Deep learning has been all over the news lately. In a presentation I gave at Boston Data Festival 2013 and at a recent PyData Boston meetup I provided some history of the method and a sense of what it is being used for presently. This post aims to cover the first half of that presentation, focusing on the question of why we have been hearing so much about deep learning lately. The content is aimed at data scientists who might have heard a little about deep learning and are interested in a bit more context. Regardless of your background, hopefully you will see how deep learning might be relevant for you. At the very least, you should be able to separate the signal from the noise as the media hype around deep learning increases.

What is deep learning?

I like to use the following three-part definition as a baseline. Deep learning is:

  1. a collection of statistical machine learning techniques
  2. used to learn feature hierarchies
  3. often based on artificial neural networks

That's it. Not so scary after all. For something that sounds so innocuous, there's a lot of rumble in the news about what might be done with DL in the future. Let's start with an example of what has already been done to motivate why it is proving interesting to so many.

Save the whales!

What does it do that couldn't be done before?

We'll first talk a bit about deep learning in the context of the 2013 Kaggle-hosted quest to save the whales. The competition asks its players the following question: given a set of 2-second sound clips from buoys in the ocean, can you classify each sound clip as having a call from a North Atlantic right whale or not? The practical application of the competition is that if we can detect where the whales are migrating by picking up their calls, we can route shipping traffic to avoid them, a positive both for effective shipping and whale preservation.

In a post-competition interview, the competition's winners noted the value of focusing on feature generation, also called feature engineering. Data scientists spend a significant portion of their time, effort, and creativity engineering good features; in contrast, they spend relatively little time running machine learning algorithms. A simple example of an engineered feature would involve subtracting two columns and including the new number as an additional descriptor of your data. In the case of the whales, the winning team represented each sound clip in its spectrogram form and built features based on how well the spectrogram matched some example templates. They then iterated, building new features that helped them correctly classify the examples they had gotten wrong with the previous set of features.
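
To make "feature engineering" concrete, here is a small hypothetical sketch in Python with pandas; the column names and values are invented for illustration and are not taken from the winning team's actual pipeline:

    import pandas as pd

    # Invented stand-in data: one row per 2-second sound clip.
    df = pd.DataFrame({
        "signal_peak_hz":   [210.0, 305.0, 198.0],
        "noise_floor_hz":   [180.0, 175.0, 190.0],
        "template_score_a": [0.62, 0.11, 0.47],
        "template_score_b": [0.58, 0.09, 0.51],
    })

    # Subtract two columns to create a new descriptor of each clip...
    df["peak_minus_noise"] = df["signal_peak_hz"] - df["noise_floor_hz"]

    # ...or combine template-match scores into a single summary feature.
    df["best_template_score"] = df[["template_score_a", "template_score_b"]].max(axis=1)

    print(df.head())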

The final results

This is a look at the final standings for the competition. The results among the top contenders were pretty tight, and the winning team's focus on feature engineering paid off. But how is it that several deep learning approaches could be so competitive while using as few as one-fourth as many submissions? One answer to that question arises from the unsupervised feature learning that deep learning can do. Rather than relying on data science experience, intuition, and trial-and-error, unsupervised feature learning techniques spend computational time automatically developing new ways of representing the data. The end goal is the same, but the experience along the way can be drastically different.
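
For contrast, here is a minimal sketch of what unsupervised feature learning can look like in code, using a small autoencoder in Keras; the layer sizes and the random stand-in data are placeholders rather than anything the competitors actually used:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # An autoencoder is trained to reconstruct its input with no labels at all;
    # the narrow middle layer becomes an automatically learned representation.
    X = np.random.rand(1000, 64).astype("float32")  # stand-in for spectrogram patches

    inputs = keras.Input(shape=(64,))
    encoded = layers.Dense(16, activation="relu")(inputs)      # learned features
    decoded = layers.Dense(64, activation="sigmoid")(encoded)  # reconstruction

    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)  # input is also the target

    # The encoder alone maps raw clips to the learned features, which could
    # then feed a downstream classifier.
    encoder = keras.Model(inputs, encoded)
    print(encoder.predict(X, verbose=0).shape)  # (1000, 16)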

Not the same

This is not to say that 'deep learning' and 'unsupervised learning' are necessarily the same concept. There are unsupervised learning techniques that have nothing to do with neural networks at all, and you can certainly use neural networks for supervised learning tasks. The takeaway is that deep learning excels in tasks where the basic unit (a single pixel, a single frequency, or a single word) has very little meaning in and of itself, but a combination of such units has useful meaning. It can learn these useful combinations of values without any human intervention. The canonical example used when discussing deep learning's ability to learn from data is the MNIST dataset of handwritten digits. When presented with 60,000 digits, a neural network can learn that it is useful to look for loops and lines when trying to classify which digit it is looking at.

Learning accomplished

On the left, the raw input digits. On the right, graphical representations of the learned features. In essence, the network learns to "see" lines and loops.

Why the new-found love for Neural Networks?

Is this old wine in new wineskins? Is this not just the humble neural network returning to the foreground?

Neural networks soared in popularity in the 1980s, peaked in the early 1990s, and slowly declined after that. There was quite a bit of hype and some high expectations, but in the end the models were just not proving as capable as had been hoped. So, what was the problem? The answer to this question helps us understand why this is called "deep learning" in the first place.

What do you mean, 'deep'?

Neural networks get their representations from using layers of learning.  Primate brains do a similar thing in the visual cortex, so the hope was that using more layers in a neural network could allow it to learn better models.  Researchers found that they couldn't get it to work, though. They found that they could build successful models with a shallow network, one with only a single layer of data representation. Learning in a deep neural network, one with more than one layer of data representation, just wasn't working out. In reality, deep learning has been around for as long as neural networks have – we just weren't any good at using it.

Shallow Neural Network

Deep Neural Network

Deep neural networks have more than one hidden layer. It really is that simple.
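
In code, the distinction really is that small. Here is an illustrative sketch in Keras (the layer widths are arbitrary placeholders, not a recommended architecture):

    from tensorflow import keras
    from tensorflow.keras import layers

    # A "shallow" network: a single hidden layer of data representation.
    shallow = keras.Sequential([
        keras.Input(shape=(784,)),
        layers.Dense(128, activation="relu"),   # the one hidden layer
        layers.Dense(10, activation="softmax"),
    ])

    # A "deep" network: more than one hidden layer, each building on the last.
    deep = keras.Sequential([
        keras.Input(shape=(784,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

    shallow.summary()
    deep.summary()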

So, what changed?

The Trinity of Deep Learning

The Fathers of Deep Learning

Finally in 2006 three separate groups developed ways of overcoming the difficulties that many in the machine learning world encountered while trying to train deep neural networks. The leaders of these three groups are the fathers of the age of deep learning. This is not at all hyperbole; these figures ushered in a new epoch.  Their work breathed new life into neural networks when many had given up on their utility. A few years down the line, Geoff Hinton has been snatched up by Google; Yann LeCun is Director of AI Research at Facebook; and Yoshua Bengio holds a position as research chair for Artificial Intelligence at University of Montreal, funded in part by the video game company Ubisoft. Their trajectories show that their work is serious business.

What was it that they did to their deep neural networks to make it work? The topic of how their work enables this would merit its own lengthy discussion, so for now please accept this heavily abbreviated version. Before their work, the earliest layers in a deep network simply weren't learning useful representations of the data. In many cases they weren't learning anything at all. Instead they were staying close to their random initialization because of the nature of the training algorithm for neural networks.  Using different techniques, each of these three groups was able to get these early layers to learn useful representations, which led to much more powerful neural networks.

This is what the hype is about

Each successive layer in a neural network uses features in the previous layer to learn more complex features.

Now that this problem has been fixed, we ask, what is it that these neural networks learn? This paper illustrates what a deep neural network is capable of learning, and I've included the above picture to make things clearer. At the lowest level, the network fixates on patterns of local contrast as important. The following layer is then able to use those patterns of local contrast to fixate on things that resemble eyes, noses, and mouths. Finally, the top layer is able to apply those facial features to face templates.  A deep neural network is capable of composing more and more complex features in each of its successive layers.

This automated learning of data representations and features is what the hype is all about. This application of deep neural networks has produced models that successfully learn useful representations of imagery, audio, written language, and even molecular activity. These have previously been some of the hard problems in machine learning, which is why they get so much attention. Don't be surprised if deep learning is the secret ingredient in even more projects in the future.

The above touches on most of the points I made in the first half of the presentation, a presentation I hope makes for a useful primer on deep learning. The key takeaway is that the breakthroughs in 2006 have enabled deep neural networks that are able to automatically learn rich representations of data. This unsupervised feature learning is proving extremely helpful in domains where individual data points are not very useful but many individual points taken together convey quite a bit of information. This accomplishment has proven particularly useful in areas like computer vision, speech recognition, and natural language processing.

The second half of the talk was a whirlwind tour through the topics that fall under the umbrella term of 'deep learning'. Feel free to  contact me by email or leave a comment below if there are any questions you have or if you'd like pointers on where to find additional material.


Ask The Thought Leaders: What’s The Future of Cybersecurity?

As we become increasingly dependent on technology in our daily lives, we open ourselves up to an entirely new kind of threat: cyberattacks.

While in the late 90s and early 2000s cybersecurity went as far as your company's IT guy, today it's a multi-billion dollar global industry that is expected to top $1 trillion by 2020. Whether it's an email scam targeted at individuals or corporate data theft affecting millions of people at one time, the rise in cyberattacks and their increasing reach has made cybersecurity a very hot topic.

When we started thinking about cybersecurity and where it's heading, one of the first issues brought up was the internet of things. Someone tampering with your computer while you're surfing the web is an inconvenience, but what about someone hacking into your car while you're driving down the highway?

So, in an effort to ease our fears and gain a better perspective we decided to ask a group of cybersecurity experts…

What's the future of cybersecurity?

It was not an easy question to answer. Here's what they had to say…

"In 10-15 years, we will be deep in a 'war of the machines' era with advances in artificial intelligence bringing fast and sophisticated execution of security defense and cybercrime. This will be a battle of AI vs AI. The availability of low cost computing and storage, off-the-shelf machine learning algorithms, AI code and open AI platforms will drive increased AI use by the good guys to defend and protect – but also increase deployment of AI by the bad guys. There will be sophisticated attacks launched on a grand scale, quickly and intelligently with little human intervention, that compromise our digital devices and web infrastructure. Cybercriminals will create fully autonomous, AI-based attacks that will operate completely independently, adapt, make decisions on their own and more. Security companies will counter this by developing and deploying AI-based defensive systems. Humans will simply supervise the process."

"Employers will look further outside of IT for tech talent. Organizations aren't just looking for the standard computer engineer anymore. While they still need the engineers, the developers, data scientists and the technological tools to write, pull and track data, the need to have professionals who can make sense of all of that data and communicate it back to the executive team in business terms that they can understand is becoming increasingly important. How someone is able to present technical information and frame it as a business problem is going to be in high demand. We will see more organizations looking outside the traditional skill set to cultivate the next generation of cybersecurity professionals."

"A big trend I see is a focus on service resilience, i.e., making it so that a DDoS can melt one provider or one datacenter, but your service will automatically migrate to another site that can serve the same content. Expect resilience, as opposed to prevention, will become more talked about."

"We can expect the unexpected. I never would have predicted last year that we would be talking about the DNC and hacking of elections. Expect new trends to come out of left field. Ransomware will be on the upswing and evolve in new unforeseen ways. It will be more targeted and focus on more valuable targets as we saw with healthcare. And it will continue to attack new, more damaging industries like we recently witnessed with San Francisco BART and Muni. Like the attacks with Krebs and Dyn, DDoS is coming back in a big way. Thanks to the proliferation of insecure things on the Internet, the risk of crippling cyberattacks will only increase."

"Blockchains are moving from the realm of just fueling cryptocurrencies like Bitcoin to providing smart contracts, identity management, and multiple ways of proving integrity of data. They may also hold the key to defending against IoT attacks.

Quantum computing will have possibly the biggest impact within 10 years. Most over-the-wire encrypted transmissions collected over the next decade will be readable, and even private keys will be reversible from public blockchains (for example, you can spend someone else's Bitcoin). Post-quantum safe crypto will be a must.

AI will be used to identify hacking flaws and patch them to stay ahead of malicious attackers."

"The top challenge for cybersecurity isn't preventing data breaches, stamping out ransomware, or preventing ever-more-massive DDoS attacks, it is securing our digital privacy. 2017 and the years to come will dictate the future of cybersecurity, and most importantly human privacy. Digital threats have evolved quickly and can wreak havoc on our lives, endangering our personal privacy and the privacy of those around us.

To tackle this important issue, we need the national government to take a stance on what our digital privacy is. Is it an immutable human right? If so, there needs to be explicit legislation that goes beyond what is currently in place. It needs to protect each and every citizen and hold those who might put our privacy in jeopardy accountable for their actions. This will be the most important cybersecurity decision in the next year and it will shape the security landscape for years to come."

"The future that I see is Universal Second Factor Authentication as standard on all logins that contain sensitive information.. U2F is similar to Two Factor authentication (2FA) but more secure.

Whilst 2FA is better than nothing, it is inherently insecure. The most popular methods like Time-based One-Time Password (TOTP), used by offerings like Google Auth services, transmit a shared secret master key) over the internet during the setup process.

This weakness is now being recognised more than ever with companies like Dropbox partnering with Intel on U2F after their hacking issues last year. Universal Second Factor (U2F) outperforms 2FA because it never reveals sensitive information.

  • No shared secret (private key) is sent over the internet at any time.
  • No sensitive or confidential information shared due to public key cryptography.
  • It's easier to use as there is no retyping of one-time codes.
  • No personal information is associated with the secret key.

Because there is no secret shared and no private databases stored by the provider, a hacker is not able to steal the entire database to get access. Instead, they would have to target individual users, and that is much harder."
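
To see concretely why TOTP-style second factors depend on a shared secret, here is a minimal sketch of the TOTP computation (per RFC 6238 and RFC 4226) using only Python's standard library; the secret shown is a made-up placeholder, not a real credential:

    import base64, hashlib, hmac, struct, time

    # Both the server and the authenticator app hold the SAME secret, which is
    # exactly the weakness described above. Placeholder value for illustration.
    shared_secret_b32 = "JBSWY3DPEHPK3PXP"

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // period                # current time step
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()  # HOTP core
        offset = digest[-1] & 0x0F                          # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # Anyone who obtains shared_secret_b32 (say, by intercepting the setup QR code)
    # can generate the same codes; U2F avoids this by never transmitting a private key.
    print(totp(shared_secret_b32))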

"There will be a shift in the basic human feelings of security as crime becomes more focused on the cyber domain. New threat models will arise, and cyber criminals will get their inspiration from the IT world (ransomware, APTs, and more).

One major target area will be vehicles. With every car now connected, each one is a potential target. Vehicles are controlled by Electronic Control Units (ECU), and cyber security will become an integrated part of every ECU, just as security is embedded in any PC or organizational network. Consumers will see it as standard just like seat belts, ABS, and other automotive safety elements.

This will also change whole industries. For example, insurance models will shift from covering the decreasing number of car accidents, and instead focus on data breaches and accidents that are a result of bugs or hacks."

"While the lone hacker will disappear in favor of ever more organised cyber criminals, the net threat to organisations will remain neutral as the industry-wide information security skills shortage will narrow due to an improved focus within the educational establishment, as well as top salaries being paid to the best quality professionals.

Hacktivism will become a bigger headache for politicians, however, as the march of globalization leaves more and more people feeling disenfranchised and powerless to be heard through conventional means.

The key business threats today will be the key threats of the next two decades as well. While unsophisticated, phishing attacks will always be a cheap and effective money-generating threat and ransomware's use of encryption will make it hard to discount any time soon.

Beyond that, the one certainty is that information security will, and always will be, a reactive solution to all new emerging threats."

"The multi-million dollar ransomware industry has grown and will continue to grow with amazing speed in the years to come, thanks in part to the spread of untraceable cryptocurrency such as Bitcoins and the proliferation of ransomware kits on the dark web, which allow anybody, even script kiddies with no programming skills, to put together and reap the financial rewards of ransomware attacks.

Ransomware is increasingly targeting organizations in the financial and healthcare industries. These organizations often have thousands or even tens of thousands of gigabytes of customer/patient data they cannot afford to lose–which makes them all the more willing to pay handsomely to get their data back at any cost."

"CISOs will drive cybersecurity as a strategic and integral part of the greater organization and will switch their solutions to those that properly protect against advanced attacks, seeking out technologies monitoring the entire threat life cycle – from initial malware delivery to callbacks and data exfiltration."

"IoT will overtake everything else in connected devices and not only will be the most hacked stuff, it will continue to be the hardest to protect. This will turn cybersecurity on its head because security on all IoT is terrible, and totally opaque to users. It's take it or leave it. You can't harden the devices after the fact. You can't even log into them. You just have to hope they are secure and your perimeter can stop all attacks.

Building secure, hardened IoT devices from the start is ultimately the best solution. One new challenge will be that IoT devices will have encrypted connections (or they should!), and it will be effectively impossible for a network-based device like a firewall to see inside those sessions. There are SSL/TLS interception methods that can be used, but they require the devices to trust the interception device. Harden your IoT now."
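
That trust requirement is exactly what a standard TLS client enforces. As a rough illustration (ours, not the contributor's), the handshake below only succeeds if the server certificate chains to a CA in the client's trust store, so an inspecting middlebox that re-signs traffic with its own CA is rejected unless the device has been configured to trust that CA. The hostname example.com is just a placeholder.

```python
# Rough sketch: a TLS client only accepts certificates that chain to a CA it
# already trusts, which is why interception requires the device's cooperation.
import socket
import ssl

context = ssl.create_default_context()  # verification against the system trust store

with socket.create_connection(("example.com", 443), timeout=5) as raw_sock:
    # If a middlebox re-signed the server certificate with its own CA and that
    # CA is not trusted here, this handshake raises SSLCertVerificationError.
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version())
        print("server certificate subject:", tls_sock.getpeercert()["subject"])
```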

"It used to be that security concerns were the biggest impediments to Public Cloud adoption. But, in 2017, that will no longer be the case. It is widely accepted that security in Public Clouds is strong, shifting the top concern to compliance. Organizations moving to the cloud need to be able to demonstrate and provide assurance that they are doing things in a secure and compliant manner. So, whether it is PCI, HIPAA, NIST-800 53 or internal compliance standards, organizations need to be able to demonstrate that they can maintain compliance throughout the fast-pace of change that takes place in the Cloud. To solve this, they will have to turn to security and compliance automation solutions that will help them measure and report with ease."

"Crown jewels" in the cloud. Enterprises will also move beyond using the public cloud solely for test/dev or burst capacity purposes. And again, because they want to benefit from the elasticity and the capacity on demand the cloud has to offer, they will now be looking to leverage IaaS for hosting always-on, mission-critical, Tier-1 applications—aka the crown jewels."

"The future of cyber security in 15 years is on the one hand going to sound like science fiction, and on the other hand sound like it's all still today. Technically, you will see massive developments in AI/machine learning, human/machine interface and hundreds of billions of IoT devices.

AIs will be hacked and subverted, which will require a new breed of "AI Auditors" who test AIs for ethical behavior. With neural interfaces, cyber criminals will be able to feed false data directly into someone's brain, and people will need to be trained to recognize this and not act on it. IoT devices will be smart enough to provide limited AI functions, but those again are suspect and could easily be hacked.

People will still be socially engineered, just as they are today, and will require ever more sophisticated training to recognize hacking attempts."

"If you had asked me 15 years ago, if some companies would still be using IBM AS400 mainframes to run business applications, I would have thought you were crazy. As we look 10-15 years forward at the cyber security landscape, I think the big assumption we can make is that history will continue to repeat itself. We will have a legacy environment that will be difficult to protect because of outdated, no longer supported hardware and software, maybe even still the IBM AS400. I am 100 positive, the cutting-edge computing environment, potentially quantum computing, again, will NOT be designed with security. As a result, we will have to figure out how we retrofit a security framework on something not designed to be protected. The good news is that I believe Artificial Intelligence and Machine Learning will actually start yielding capabilities that will finally provide real help in defensive operations."

"Cybersecurity is getting complex as the number of cyber threats are growing. The Internet of Things (IoT) is bound to add mountains of challenges for cyber security. There are more and more cyber specialists who are starting to search for a more mature approach to identify and deal with cyberattacks.

Most of the tools in use today are only capable of identifying threat signatures: they try to match a pattern that has been used in previous attacks. These tools and approaches fail to identify new threats.

Therefore, experts feel that one of the most efficient ways to manage the looming threats in the days to come is through analytics and automation. The premise is to identify cyber risks and intrusions, and to detect attacks, with the help of predictive analytics. The ideal cybersecurity environment of the future will combine human and machine intelligence, automated and analytics-driven alerts, and an effective security mechanism."
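
The signature-versus-analytics contrast is easy to sketch. The toy example below is our illustration on assumed, synthetic data (not any vendor's detector): a signature list only catches what it has seen before, while a simple anomaly model such as scikit-learn's IsolationForest can flag behaviour that merely deviates from a learned baseline.

```python
# Toy contrast between signature matching and analytics-driven detection.
# The "log features" are synthetic; this is an illustration, not a product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Signature approach: only flags patterns already on the list.
KNOWN_BAD = {"mimikatz.exe", "evil.ps1"}
def signature_match(process_name: str) -> bool:
    return process_name in KNOWN_BAD

# Analytics approach: learn a baseline of normal activity and flag outliers,
# even when the exact pattern has never been seen before.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[50, 200], scale=[5, 20], size=(500, 2))  # e.g. logins/hour, MB transferred
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_events = np.array([[52, 190],     # ordinary activity
                       [48, 5000]])   # unusually large transfer, likely flagged
print(model.predict(new_events))      # 1 = looks normal, -1 = anomaly
```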

"There will be a shift in focus from broad-based attacks to more targeted attacks against specific firms or individuals. The best evidence of this is the IP theft against law firms, insider spoofed spear phishing to finance and HR people, ransomware targeting healthcare after methodist paid out."

"The one size fits all security paradigm will disappear. The old-school (useless) compliance mandates per vertical – FISMA for financials, HIPAA for healthcare etc. will disappear. Vendors will no longer be able to provide a product or service that is uniformly accepted (or reviled) by consumers or enterprises. Instead a new form of customized security will emerge. This will be based on a combination of end customer's self-assessment of risk tolerance and machine learning based on that customer's public past history and industry best practices – and voila a risk score will be attributed to the customer. This score will be the barometer to deliver a 'custom' security solution to the end customer by vendors. Vendors will be able to choose which risk scores they are able to fulfill and that becomes their target customer base. End of story."

"Many traditional concepts will be hopefully gone. Perimeter security, storage-only encryption, access control based on privilege records, authentication that relies on one strong factor, DMZ – they will fade out or vanish completely.

Many new techniques will arise through machine learning and weak AIs, especially in intrusion detection and making sense of large-scale monitoring and signal analysis. Many new techniques will arise from advancements in cryptography and collective effort to eliminate poor cryptography. Still, we will have snake-oil products and systems.

Attackers will still be ahead of the game because security is asymmetric in effort and success criteria between attacker and defender.

With the proliferation of IoT and a bunch of computers in every device, the damage will get physical. The growing complexity of real-world processes, intertwined with the complexity of the security protocols protecting them, will lead to many new challenges in practical use cases for security tooling."

"New applications and services are released every day, but the backbone infrastructure has stayed the same for a very long time. We are at a point where patching the system with add-ons is not good enough – we need a rebuild to get to the next level, to be able to guarantee the service that we are so dependable of, address crime and guarantee fundamental security. There is no longer a reason for most traffic to be unsecure and unauthenticated on the Internet.

The IoT and IA trends are still in early stages and will continue to grow rapidly, but so far with very little concern about security. Until we address these issues, I see an exceptionally bright future for the cybersecurity and IAM industry!"

"Smart-connected home device shipments are projected to grow at a compound annual rate of nearly 70% in the next five years, and are expected to hit almost 2 billion units shipped in 2019—faster than the growth of smartphones and tablet devices. Given the diversity of operating systems and lack of regulation for these smart devices, there will be more large scale hacking attacks due to IoT device compromises. Wi-Fi and Bluetooth networks, however, will become polluted and clogged as devices fight for connections. This will, in turn, push mission-critical tasks to suffer.

Even greater is the likelihood that a failure in consumer-grade smart devices will result in physical harm. As more drones encroach on public airspace for various missions, more devices are used for healthcare-related services, and more home and business appliances rely on an Internet connection to operate, it becomes ever more likely that a device malfunction, a hack, or a misuse will trigger a conversation about regulating device production and usage."

"Over the next ten years, better cyber security orchestration will augment gaps between AI and human incident response. Current SIEM and IDS systems without AI create too much noise for humans to filter, and straight AI cannot differentiate threats well enough to take accurate automated action. AI and Deception Technology will be used in conjunction with well-trained human security specialists to respond to perceived threats on the fly.

The current time-to-detection for a security breach is 203 days. Time-to-detection will drop to seconds at companies that embrace human-and-AI orchestration. Breaches will continue to occur at unacceptable rates until regulations force threat hunting and advanced incident response activities. On March 1, 2017, the New York Department of Financial Services put cyber regulations into effect that are meaningful but not overly burdensome. Over the next ten years, I see similar laws passing, forcing better cyber security."
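
Orchestration of that kind is often summed up as "let automation act on the obvious, route the ambiguous to people." The snippet below is a deliberately simplified sketch with assumed thresholds and hypothetical host names, not a real SOAR playbook: a model-produced score decides whether an alert is auto-contained, escalated to an analyst, or logged for threat hunting.

```python
# Simplified human/AI orchestration sketch with assumed thresholds.
# The score would come from an upstream model or deception sensor.
from dataclasses import dataclass

@dataclass
class Alert:
    source_host: str
    score: float  # 0..1 confidence that the activity is malicious

def orchestrate(alert: Alert) -> str:
    if alert.score >= 0.95:
        # Clear-cut: act in seconds, then notify a human.
        return f"auto-contain: isolate {alert.source_host} and open a case"
    if alert.score >= 0.60:
        # Ambiguous or high-impact: a trained analyst decides.
        return f"escalate: page the on-call analyst about {alert.source_host}"
    return "log and enrich for later threat hunting"

for alert in [Alert("db-prod-07", 0.98), Alert("laptop-142", 0.72), Alert("printer-03", 0.10)]:
    print(orchestrate(alert))
```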

"Over the next decade, the biggest security risk I see is relying on perimeter-based technology to keep data "locked in" the enterprise. I hear security teams voice concerns with "sensitive data leaving the company" and the need to keep it protected. To improve privacy and confidentiality, we must first shift our focus from securing the perimeter–the network, applications, and endpoints–and focus on protecting data directly.

Secondly, we need to adopt intelligent and automated security systems. Automation means investing in tools that automatically secure data based on location, context, the recipient, the user's identity, and more importantly, tools that don't require constant human interaction. We simply cannot rely on employees or our partners to do the right thing.

Finally, we must start protecting the integrity of data. Without proper encryption, access control or identity-aware systems in place, we leave ourselves open to having information manipulated in malicious ways."
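
Protecting data directly, confidentiality plus integrity, is essentially what authenticated encryption provides at the record level. The sketch below is our minimal illustration using AES-GCM from the Python cryptography package, with deliberately simplified key handling; the record and context values are hypothetical.

```python
# Minimal data-centric protection sketch: AES-GCM keeps a record confidential
# and detects tampering wherever the data travels. Key handling is simplified;
# in practice the key would come from a KMS or HSM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

record = b'{"patient_id": 1234, "diagnosis": "..."}'   # hypothetical record
context = b"record-id:1234"                            # bound to the ciphertext as associated data
nonce = os.urandom(12)

ciphertext = aesgcm.encrypt(nonce, record, context)

# Decryption verifies integrity: any modified bit in the ciphertext or the
# associated data makes this call fail instead of returning corrupted data.
assert aesgcm.decrypt(nonce, ciphertext, context) == record
```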

"From being in the industry for over 20 years, you must differentiate between threats, controls and how you identify, select and manage both. The core threats haven't changed much: they're about getting someone with access to something you want to help you get it. What's changed is the way that happens.

Technology change enabled new access paths and a dramatic increase in attacks, but fundamentals are the same. Regrettably, the core approach to managing cyber risk has hardly changed at all. This is where the biggest changes should take place, but it's hard to say if it actually will.

Unfortunately, I expect the industry to continue to put too much emphasis on technical aspects and not enough on how to protect the businesses and people at risk. Technology is sexy and easy to identify. Risk management and organizational change are much harder, but that's where the focus must be."

"Organizations will need to place a focus on shifting from promoting awareness of the security "problem" to creating solutions and embedding information security behaviors that affect risk positively. The risks are real because people remain a 'wild card'. Many organizations recognize people as their biggest asset, yet many still fail to recognize the need to secure 'the human element' of information security. In essence, people should be an organization's strongest control.

Instead of merely making people aware of their information security responsibilities and how they should respond, the answer for businesses of all sizes is to embed positive information security behaviors that result in "stop and think" habits and become part of the organization's information security culture. While many organizations have compliance activities that fall under the general heading of 'security awareness', the real commercial driver should be risk, and how new behaviors can reduce that risk."

"It's great that we are able to even have a discussion about cyber security, because until recent years it's been neglected and not a high priority to citizens or the companies. The tides are turning very quickly to where it is a high priority, so let's discuss the future. The future is not entirely human based cyber security. Many have already released their artificial intelligence for security. I am not worried about that, because we are also working on our AI as well. What this type of AI is, isn't having conversations with people it's doing analysis of data and finding the issues. We haven't released our AI yet, but it said to have discovered 30,000 vulnerabilities last time we checked that aren't known to the public or disclosed to the companies."

"In 10-15 years, cybersecurity might be about preventing 'real' identity theft. In 2017, we call theft of social security numbers and passwords 'identity theft'. But what if criminals could steal not just these, but also our fingerprints, our brain waves, and even our genetics? This could happen, as passwords get easier to crack. First, we'll shift to using biometrics like fingerprints and iris scans to authenticate ourselves online. But once hacked, we can't change these things, so we'll have to abandon them. We might switch to new methods of authentication, through brain wave sensors or genetics. But these can be hacked too. And the more information we provide, the closer criminals will get to capture our essential selves."

"Since humans do make mistakes, social engineering will continue to be an effective form of attack in the years to come, no matter the technology controls put into place. It has been long past time for organizations to put more focus on the human side of their security program.

Any security program can benefit immediately from a review of its internal policies, improved metrics for measuring the program's success, and consultation with legal counsel to ensure proper insurance and other risk-mitigation plans are in place. These activities cost very little, have immediate turnaround times, and can deliver quite a lot of return to the organization.

Perhaps most important is to understand the behavior of employees and implement programs that help them work and operate in a more secure manner. Security awareness training and education programs may not be the glitziest pieces of a security program, but they are critical to its success. Beyond that, organizations should involve employees more directly, understand why social engineering attacks work on them, and help address their questions and concerns."

"Regulatory compliance is not just the buzzword du jour when it comes to cybersecurity-compliance is the undeniable future of any company, small or large, especially one seeking a government contract. Recent federal compliance rulings have already significantly impacted many SMB contractors; and as cybersecurity threats continue to proliferate and computer technology and digital culture continue to advance, federal compliance regulations are scheduled to grow. The next 10 to 15 years forecast dramatic revolutions in technology-including advancements in the Internet of Things (IoT) and the growing number of smart connections. As a result, cybersecurity will need to become smarter too. Industries with diverse needs, namely manufacturing, banking, healthcare, higher education or law, will be required to armor themselves with complex cybersecurity solutions, in order to fend off attacks from smart technologies, and federal compliance regulations will serve as guardians in this brave new world."

"The future of cyber security is an increasing shift away from stronger perimeters and better intrusion detection, to shared notions of identity and reputation rooted in globally accessible systems like DNS. While content-scanning will still have an important role to play in protecting organizations, the first line of defense against attacks will be the ability to verify the source of a request and the reputation of the originator. Security will increasingly become a concern of the internet ecosystem as a whole, and the corresponding solutions will look more like 'vaccines' – they will benefit not only the organization that deploys such a solution, but also the parties with whom they interact."