Chapter 8

How to Build the Future

It’s increasingly obvious that technology will dominate our future. Technically, that future is already here. Self-driving cars, intelligent appliances, drone photography, eCitizenship, digital currency, augmented reality, virtual reality, online collaboration tools, video calls, instant video and music services, AI-created art, online courses, lenses that detect disease, 3-D printing…. I could keep going, but you get the point. There’s a lot of technology that touches our lives today, and much of it is indistinguishable from magic. To put how far we’ve come into perspective, consider that little smart phone in your pocket. It’s approximately 120 million times more powerful than the computer that powered the Apollo 11 moon mission, and the software in it is far more complex than the software that guided the spacecraft to the moon.

The cloud has fundamentally changed things, and our ability to understand and leverage it is becoming increasingly valuable. Equally important to understand are the other trends underlying how great software is built. These trends include the human and machine aspects of software development. Until recently, software was all about code written by humans. That is changing: increasingly, the software learns its own job from data. Neural networks, the technique at the heart of modern artificial intelligence (AI), now play a part in so many products, giving machines abilities we’ve never seen before.

These changes, along with the massive demands imposed by software, especially popular software, can be stressful to the humans who develop it. The importance of small passionate teams and developer happiness cannot be overstated when talking about alleviating this stress and building high-quality products. The cloud helps here too. If we think about a software development team as a team of craftspeople, we can view leveraging a managed cloud as a way to let them focus their craft on the core product with the best possible tools. All the best products today are built where code and design come together. For example, it was design and the need for an enhanced user experience that brought us instant search on Google. The code just made it possible.

We go about our day interacting with so many products—the card swipe to get into the office, the music on the drive to work, the meeting added to our calendar, etc.—and it’s all generating data. This is a world owned and controlled by data. We just live in it. Data is arguably more valuable than the code that helped create it. I can say with strong conviction that the products of the future will be built with data, not code.

Code vs. AI

Almost all technology today is powered by software: lines and lines of code written by smart engineers to do amazing things. At least, that’s how software has traditionally been built. Human-written code running on your computer helps you with word processing, managing your accounts, etc. Within the past ten years, this changed from running on your computer to running in the cloud. Now software can leverage the power of not just one or two computers but tens of thousands of computers. This gave birth to much larger platforms and applications such as Facebook, which is capable of serving billions of people simultaneously.

As we migrated to using these cloud-powered applications, we began generating data such as search keywords, our favorite people, our photos, our reading interests, our music choices, sales data, etc. Today, our personal life and work life alike are in the cloud. The volume of data we’re generating is growing exponentially, and so is the complexity of the software powering it.

In the past couple of years, the rise of data has made way for a new wave: neural networks and machine learning, commonly referred to as AI. Sophisticated models called neural networks are increasingly powering things around us, moving us from a hand-coded world to a data-inferred one. In this kind of world, it’s not the lines of code that a human engineer writes that get to decide things. Instead, that’s left to the knowledge an algorithm learns from vast amounts of data. One really great example of this is the Google Photos application, which uses very complex neural networks (AI) to detect what’s in your photos so that they’re searchable. Searching for “cats,” for example, quickly pulls up all the pictures with cats in them. This wouldn’t be possible if it weren’t for the cloud and its ability to scale to accommodate the increasing need for computing and data storage resources.
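
To make the idea of data-inferred behavior concrete, here is a minimal sketch of photo search built on a pretrained image classifier. This is not how Google Photos works internally; it only illustrates labels being learned from data rather than hand-coded. It assumes TensorFlow and Pillow are installed, and the photo file names are hypothetical.

```python
# A minimal sketch: label photos with a pretrained classifier, then "search" by label.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def labels_for(image_path):
    """Return the top-3 labels the network infers for one photo."""
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.array(img, dtype="float32")[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
    return [label for (_, label, _) in top]

# Build a tiny searchable index, then look for photos labeled as some kind of cat.
photos = {path: labels_for(path) for path in ["IMG_001.jpg", "IMG_002.jpg"]}  # hypothetical files
print([p for p, labels in photos.items() if any("cat" in label for label in labels)])
```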

The AI-powered future only drives up our demand for computing resources. The general-purpose microprocessors that power our laptops, phones, and servers are not fast enough for the math behind neural networks. This has led to a renewed interest in 3-D graphics cards (graphics processing units, or GPUs), the specialized processors best known to gamers as the chips responsible for the realistic scenes in today’s video games. If you’ve ever played a video game on your desktop such as Need for Speed or a simpler one on your mobile such as Angry Birds, then you’ve made use of a graphics processor. The ability of these processors to do thousands of computations in parallel, unlike standard microprocessors, which can do only a handful at a time, is what makes them attractive to developers working on AI.
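
As a rough illustration of why that parallelism matters, the hedged sketch below times the same large matrix multiplication (the core operation inside neural networks) on the CPU and, if one is present, on a GPU. It assumes TensorFlow 2.x is installed; the actual speedup will vary with hardware.

```python
# Time one large matrix multiplication on the CPU and, if available, a GPU.
import time
import tensorflow as tf

a = tf.random.normal((4096, 4096))
b = tf.random.normal((4096, 4096))

def benchmark(device):
    with tf.device(device):
        start = time.time()
        c = tf.matmul(a, b)   # millions of independent multiply-adds
        _ = c.numpy()         # force the computation to complete
    return time.time() - start

print("CPU seconds:", benchmark("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU seconds:", benchmark("/GPU:0"))
```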

Companies such as Facebook and Google are making their AI-building software tools (Torch and TensorFlow, respectively) freely available. This, coupled with the easy availability of capable hardware, has sparked a massive global wave of startups and businesses trying to leverage AI to create value. We are entering an era where AI software will be able to do things that we previously expected humans to have a long-term monopoly on.

The Team

If software will power our future, then it’s important to understand how it is built, why it is built, and by whom. My experience from across Silicon Valley tells me that the best software is built by small passionate teams who deeply understand the problem and the customer. Even larger projects, when broken down into smaller manageable components, can effectively be tackled by small teams. The key here is small teams; constraints foster a high level of creativity. Some of the greatest software products of our time, WhatsApp and Instagram, were both created by small teams.

In Silicon Valley, software teams usually consist of a product manager, a few developers, a designer, and software testers. Constraints usually include people, time, money, and expertise.

The product manager is tasked with planning out the project, managing timelines, communicating statuses across the rest of the organization, talking to potential customers, and having a deep understanding of the problem space. The product manager depends on the developers to understand the scope of the actual software development tasks and works with them to break them down into manageable chunks.

The developers are tasked with building the product. They shoulder the largest part of the work involved in getting things done. It’s important that the developers are experienced enough to work with the product manager to define realistic timelines, and that requires them to understand the complexity of the tasks involved.

The designer tries to understand how the product influences the user and how design can help ensure a pleasant low-friction experience. Design is often considered an additional competitive advantage, as customers often choose products that help them get their job done with the least amount of stress.

The software testers, or test engineers, are responsible for ensuring that new code changes don’t break the existing product, reporting bugs to be fixed, and helping the team build a high-quality product while maintaining a high throughput.

Some of the surest telltale signs of poor-quality software are a lack of thought given to the user experience and an inability to leverage data to improve the product. Since the nature of software is changing from being code-centric to being data-centric, the teams building it will also have to evolve. To build high-quality software, teams will have to be led by a design-centric product leader who wears two hats, one as a traditional product manager and one as a designer. This product leader will be responsible for combining a deep knowledge of the problem space and the customer with a deep understanding of the solution. These are common traits of the best product leaders today. The ability to own the technology and the experience is how some of the best products are built. Innovation happens when you can connect the dots.

Another new addition to the team will be the data engineer, who will take the place of the traditional software engineer. This will be a strong software engineer with the additional skill sets required to tackle machine learning problems. This person will be able to combine knowledge of data with frameworks such as TensorFlow and Google Cloud Dataflow to build large-scale neural networks and data processing pipelines. In the software world, most of the low-hanging fruit has been picked. The next generation of innovative products will need a new breed of engineers.

Now on to the testers, or the people responsible for maintaining the quality of your product. Most companies employ many testers (quality assurance) to manually test their product every time new changes are ready to be launched. Part of the testing is to ensure that the new feature works as expected. The other part, regression testing, is to ensure that the existing functionality isn’t broken. Within startups, I’ve been seeing a trend where testing is going from a manual exercise to a fully automated process. The Google Cloud already provides tools such as the Firebase Test Lab, which automatically tests your app by simulating a human user. It operates the app thousands of times faster and more extensively than a real human could, logging every issue it encounters, and it requires minimal effort from your team. Testing the code changes made by your developers is also on a path to more automation, letting programming languages and the computer do more of the work. This can be done with tools such as QuickCheck, which automatically creates thousands of test scenarios, or with the adoption of more functional and strongly typed languages such as Haskell, Go, or Java 8, which will save your developers time and make your software more resistant to bugs. Letting machines handle product testing and code quality is a better way to do things, and it results in higher-quality software while keeping your team lean and focused.
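
The QuickCheck style of testing is easier to picture with a small example. The sketch below uses hypothesis, a Python library that plays the same role QuickCheck plays in Haskell: instead of a few hand-written cases, the framework generates thousands of random inputs and searches for one that violates a stated property. The dedupe function is just a hypothetical function under test.

```python
# Property-based testing with the hypothesis library (run with pytest).
from hypothesis import given, strategies as st

def dedupe(items):
    """Hypothetical function under test: drop duplicates, keep first occurrences."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

@given(st.lists(st.integers()))
def test_dedupe_removes_duplicates_without_losing_values(xs):
    result = dedupe(xs)
    assert len(result) == len(set(result))  # no duplicates remain
    assert set(result) == set(xs)           # nothing was lost

@given(st.lists(st.integers()))
def test_dedupe_is_idempotent(xs):
    assert dedupe(dedupe(xs)) == dedupe(xs)  # running it twice changes nothing
```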

Although solving the hardest problems will still require you to have the smartest and most skilled team, leveraging the cloud will make a large part of what we call software development today accessible to less sophisticated teams. Managed services are increasingly simplifying access to these advanced capabilities, making them accessible to people with limited coding skills. I can see how, within even a couple of years, people who want to build a simple data capture app will be able to create beautiful, fast, sophisticated experiences without writing a line of code. Google Cloud services such as Firebase are already on a path to making this happen.

I can imagine a small city attempting to build an app to help its residents file complaints. The team building this might not have to do anything more than check some boxes. A managed service would tie together the Google Speech API, Natural Language API, Translate API, and Firebase to produce an app capable of taking a spoken complaint, translating it, classifying the complaint type, and storing it in the cloud along with the audio for the city to follow up on. To take this scenario to the next level, the city could make this data publicly available using a feature such as BigQuery’s public datasets. Across the world, a small startup of AI developers using a service such as Google Cloud Machine Learning could use this data to power a new wave of predictive maintenance, helping the city target its work before problems arise and thereby saving millions of tax dollars.
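
To show the shape of such a pipeline, here is a minimal sketch. The helper functions are hypothetical placeholders for calls to the Speech, Translate, and Natural Language APIs and to Firebase; in the scenario above, a managed service would wire these steps together for you.

```python
# A sketch of the complaint pipeline's shape; each helper is a hypothetical
# placeholder for the corresponding managed API.

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder: call the Speech API to turn the recorded complaint into text."""
    raise NotImplementedError

def translate_to_english(text: str) -> str:
    """Placeholder: call the Translate API."""
    raise NotImplementedError

def classify_complaint(text: str) -> str:
    """Placeholder: call the Natural Language API to pick a complaint category."""
    raise NotImplementedError

def store(record: dict) -> None:
    """Placeholder: write the record and a link to the audio into Firebase."""
    raise NotImplementedError

def handle_complaint(audio_bytes: bytes) -> dict:
    original = transcribe(audio_bytes)
    english = translate_to_english(original)
    record = {
        "original_text": original,
        "english_text": english,
        "category": classify_complaint(english),
    }
    store(record)
    return record
```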

The collection, storage, and management of data will be ubiquitous, which is why you should let the managed cloud handle it rather than have your team reinvent it, so you can instead focus on the core value of your product.

The Importance of Design

The best products are those that deeply understand their users and allow them to achieve their goals with the least amount of friction. When looking at a product such as Google Search, most people talk about the engineering innovations Google had to achieve to be able to search the whole web and produce relevant results. This is true, but Google didn’t initially have the engineering scale it does today. A lot of its early success was from PageRank, the search relevance algorithm created by its founders, which ran on a few computers at Stanford.

Another factor in Google’s success that’s often overlooked is their history of novel design. The early search world was filled with messy-looking, bloated websites, where search was just one of the features. Google changed all of that with a clear, fast, search-focused experience. A simpler UI allowed them to keep their site fast on all kinds of devices and Internet speeds, which resulted in more people using Google for web searches. Even today, with Google having come so far, their search UI has not changed a lot visually. It’s still simple and focused.

Design isn’t just about making things look attractive. It’s about the entire experience. For Google, what’s changed from its early days is the search experience. It’s now instant. Every character you type generates new search results. Google Instant, although a mammoth engineering task to get right, was driven by design, a need to improve the user experience. I won’t be surprised if designers are the product managers of the future. Good designers need to have a deep understanding of the problem and the customer and an ability to conceptualize the whole product and evolve it to fulfill the market’s needs.

Design is a competitive advantage in the enterprise and in the consumer space. One of the reasons Snapchat gained prominence is that the app was designed to appeal to the creativity of younger people while, perhaps deliberately, remaining a difficult experience for adults. Having a platform all to themselves, separate from parents, surely sounded attractive to younger people.

Another design innovation that touched the whole world is Facebook’s News Feed, which helped Facebook stand apart from competitors such as Myspace. The feed connects people through news updates from their friends and family in an easy-to-consume format. The feed, which at first doesn’t seem like much, was a game changer. Its use on mobile displays helped drive Facebook’s engagement numbers skyward. Today, feeds are everywhere. It was a design innovation that everyone copied, including some enterprise apps.

I myself am a big user of Slack, a popular collaboration tool. I’ve used a few similar tools before. Slack’s design is a strong reason why I prefer it. The design that ties together chatrooms, conversations, your other apps, and emotional elements such as emoticons, coupled with instant search, makes Slack a fun, engaging, and useful tool for teams.

Design isn’t just visual. It touches everything that goes into making a product what it is. For example, the very popular Alexa from Amazon is a wonderful product designed to be super simple. It allows anyone within your home to use voice commands to perform everyday tasks such as scheduling reminders, running web searches, checking the weather, or buying milk. Designers kept the voice commands simple, and the device itself doesn’t have unnecessary buttons or a tedious setup process, which helped increase its popularity with its target audience. Although the managed cloud can’t directly help you improve your design, it will certainly help you focus more resources on implementing your designer’s vision instead of maintaining backend systems.

Leveraging the Cloud

I’ve often seen software teams approach problems without considering both real costs and opportunity costs. Inefficient code and poor choices aren’t visible in the early stages of a software project or even when the product makes initial contact with its customers, but over time, the costs begin to reveal themselves. The costs come in various forms. They may occur when developers choose tools that slow down the project, have too much fun with new untested technologies, lose focus on the problem, or try to do too much. In addition to poor choices, there is the problem of confidence outrunning expertise, which takes development teams down a path where they try to build things they lack the knowledge and competence to build. There are also the real costs of lost time and the higher server bills required to run inefficiently built software.

Software development is a complex process, and each type of problem requires a different kind of expertise. For example, dealing with large amounts of data requires knowledge of distributed systems and a deep understanding of concepts such as consensus protocols, time and order, and the CAP theorem. UI development has its own complexities and its own technologies, such as CSS, ReactJS, Swift, and Android. In addition to the skills needed to build the software, there are others required to deploy and manage it. Your team will also need the skills to deal with security, load balancers, routing, and system administration to keep your software running.

Most companies have found that it’s just not possible to attract and retain all of this talent. This talent problem has spawned a new breed of developers called full-stack developers. A developer classified as such will have enough knowledge in multiple areas to get the job done. A trait that defines the best full-stack developers is their knowing exactly where their skill set ends and the cloud begins. The best full-stack developers are increasingly focused on the managed cloud. They combine their UI and app development skills with services such as Google App Engine, Container Engine, Docker, etc., to build and deploy state-of-the-art software products.

I’m increasingly convinced that this is a great thing for software, as deep expertise isn’t required for most applications, and the rise of the full-stack developer has helped many businesses use software to improve their products and efficiency. The cloud has helped expand the talent pool of software developers, making it a more accessible profession for everyone. The lack of a formal education in computer science should not be a barrier to helping us move to a more software-powered world.

The concept of “lean” is very popular with software projects. It’s when you build a quick prototype to test out an idea and you improve on it quickly until you reach a stage where the product appeals to potential customers. I think highly of this method and encourage more people to adopt it, but I disagree with how it’s often implemented. In cases where the quick prototype requires a software product to be built, developers often build a toy program that breaks as the demands of the product increase. The company then suddenly finds itself iterating on this prototype when building the real product. The fact that this was originally built only to validate the idea is ignored. The company now has two choices: either continue to build on this weak foundation or do a complete rewrite that will add to the costs. This is a terrible position to be in, as both choices are equally bad and fraught with risks. The alternative is to leverage the managed cloud to build a solid product from day one.

The recurring theme of this book is that building on the cloud allows you to focus on your core value. It also allows you to iterate fast until you discover a product/market fit. In the software world, prototypes don’t have to be significantly different from the real thing. When you decide to build a certain product, I would assume that you’ve done some preliminary research, and I would also assume that you’re probably not planning on pivoting randomly from one product to an entirely different one every few days. You can certainly pivot from your original idea, but that’s usually because your initial hypothesis has failed entirely and you are moving on to a new one. Either way, by leveraging the managed cloud, you’ll actually move a lot faster on a more solid foundation. My point can be simplified as “building better software by writing less code.” Reducing your responsibilities, or, in other words, offloading them to a managed service such as the Google Cloud, frees you up so you can focus on the hardest problem of all: making your product successful.

Continuous Improvement

Like food, code gets stale. Okay, it’s not exactly the same kind of stale, but code that’s unable to reach users is untested and can have issues that never surface. Also, code that’s buggy but can’t be replaced easily causes the whole product to suffer. Code, once written, should be in the hands of its users as quickly as possible. When that’s not the case, it hurts the team’s throughput and morale, and it can lead to bugs and, worse, open security holes.

If you’re building software, you cannot leave your code sitting around waiting to reach its users someday. You need a high-throughput pipeline like a Formula One pit stop, where, after a quick tire change and safety check, the car is roaring back onto the track. In the software world, this is known as continuous deployment: every code change is tested and pushed out into the world. The significance of this is easy to understand. It comes down to “build things faster, but don’t break stuff.” The whole process of writing software is about keeping quality high while not slowing down the pipeline. Every choice, from the programming language to the platform and APIs, should be made with quality control in mind: how much of it you have to do yourself and how much is taken care of for you.

At LinkedIn, every code change that we submitted into the codebase would be tested automatically, and if it passed muster, it would flow out to production servers instantly. Imagine me adding a new feature to LinkedIn’s ad serving engines in the morning. By evening, that new functionality would be serving tens of millions of LinkedIn users. I cannot overstate how positively this affected everything from team morale to code quality, and it contributed to the success of LinkedIn’s products.

This is quite a major change from how things used to be. Software today is constantly changing, updating, and improving. Tesla pushes overnight software updates and fixes to cars as they charge in their owners’ garages. Apps on your mobile phone, and even the phone’s operating system, are updated and fixed almost daily. How often do you pull your phone out to find an app such as Instagram totally changed with new features? Drones conducting municipal tasks or working hard to map crops on large farms have bug fixes and security updates pushed to them mid-flight. There are even microsatellites (microsats) orbiting thousands of kilometers up in space that receive new code several times a day. Even your favorite web browser updates itself often.

We automatically get new features, security fixes, and new designs all the time, and this makes things so much better for all of us. A recent example of this is when, one morning in early 2016, Tesla owners woke up to find that they could now summon their car from their phone. It’s like something straight out of science fiction. The car opens the garage door, drives itself out, closes the door behind it, and drives itself right up to where you’re standing. This feature update was possible because Tesla continuously pushes new code right to their cars across the world using what they call OTA (over the air) software updates. This means their owners are driving something a little newer and better each morning. On the other hand, it depresses me to think that if my own car (not a Tesla) has any sort of issue, I’ll have to drop it off with the dealer for a few days so they can fix it. Oh well.

Developer Happiness

“Happy developers build better software” is a principle that’s pretty evident to smart software companies. Ensuring that their developers are happy is not something companies cared much about in the past, but that’s changing fast. Developer happiness is not about more parties at the office. It’s about better decisions, fewer meetings, and more focus—essentially, removing all the friction that prevents your development team from doing its job.

Workspaces today are a hive of distractions, including pointless meetings and noisy offices with people talking over you. But it’s not just the environment that’s actively conspiring to distract you. It’s also the tools you use, including your team’s computers and laptops, the communication and collaboration software you use internally, the programming language you’ve chosen for the project, etc. Every little thing adds up. It either drains your developers’ cognitive resources or prevents them from entering the flow—the flow being that state of mind when you’re entirely focused on the problem at hand. It’s a state that allows your mind to do its best work.

Smart companies that want to build great products are obsessed with creating work environments where their best developers can thrive. Moving all communications into cloud-based products such as Slack is a great way to turn the volume down in the room. It’s important, however, to set etiquette guidelines regarding these tools. Some examples include honoring away messages and busy statuses, using text messages before making an audio call, and not expecting to receive instant replies. Slack is a fairly flexible channel. You can instantly share all kinds of things, including images, documents, and links. Animated GIFs and emoticons add emotions and other context to lines of text. Also, new employees can quickly search or scan back through a chat thread to get up to date instead of having to ask around or rely on meetings.

Moving your communications to tools such as Slack and Google Hangouts also opens up your organization to telecommuting (remote work) possibilities. This is a relatively new phenomenon that can add value in so many ways. For example, I’ve known developers who need the quiet comforts of their home to do really deep, complex work, and who can jump back in when they need help from others on the team. Another great advantage of telecommuting is the ability to hire experts from anywhere in the world. Say you’re a San Francisco–based startup that wants to hire an AI expert who works at the University of Toronto. You can now do that without requiring them to move across the continent. With everyone working on Slack, they will feel like a part of the team, and it will be easy to coordinate and collaborate with them. Companies such as GitHub and Automattic (WordPress) have taken it to the next level: a majority of their staff work remotely.

Finally, using a managed cloud will provide your developers with state-of-the-art tools that have none of the headaches that come with roll-your-own solutions. Not requiring your developers to wake up in the middle of the night to deal with trivial issues is a pretty obvious way to keep them happy. Google takes great pains to ensure that the Google Cloud is a highly functional platform with a beautiful and practical UI. Google seems focused on building products that empower developers. For example, a single checkbox controls whether your Cloud SQL database automatically resizes its storage when it fills up, and errors thrown by your application are automatically piped into an easy-to-use tracking tool that you can deal with at your convenience. Features like these handle things that developers traditionally had to deal with on their own.

Managing Costs

In most engineering tasks, whether building cars or bridges, costs are always taken into consideration. The cost of the components, the cost of maintaining or replacing the components—it’s all factored in, and this is a critical part of the development process. However, this is not the case with software engineering. Developers rarely consider the costs involved with their technology choices. I’ve even heard of costs being entirely dismissed almost as if computing resources were free. They’re not, and as with the components in cars and bridges, the costs will add up as you grow.

Although managed services will certainly be cheaper than rolling your own thing, they do have costs. Being aware of these costs and factoring them into your software design decisions provides you with a competitive advantage. Imagine having bad design choices prevent you from experimenting with new business models such as the freemium model while your competitor sails ahead because their engineering team factored in costs. As a result of your poor planning, their product has much wider acceptance, and yours is on its way out.

A great example of this is Dropbox, a service that allows you to store all of your files in the cloud and have them synced across all of your devices. The Dropbox team initially leveraged an external cloud storage solution. In an effort to become profitable faster, Dropbox, which has a world-class engineering team and is itself considered a managed file-storage cloud, built its own storage solution (while keeping an eye on costs) called Magic Pocket. Although this innovation helped in multiple ways, one of the big wins was that Dropbox achieved profitability as a business on the back of its freemium business model, where it gave away massive amounts of free storage to gain a much smaller number of paying customers.

Another example is Netflix, which built its entire business on the Amazon Web Services cloud platform. Netflix recently shared a tool it built called Ice, which provides a detailed look at usage and costs down to every service within its entire infrastructure.

Managing software-generated costs is a vital part of building the software systems of the future. To put into perspective what we get when we leverage the cloud, I used a Google-provided calculator to see what it would cost me to run ten thousand servers in the cloud for about an hour. The answer was US$10. The question now is, what would you do with that kind of computing power?

A managed cloud such as Google’s provides you with many interesting opportunities to reduce your costs. For example, pre-emptible virtual machines (VMs), which can be taken away if Google needs them elsewhere, are over 50 percent cheaper than dedicated ones. If your developers use these VMs and build your app to be resilient to servers getting shut down, you will end up saving money. The Google Cloud also aims to follow Moore’s law with its pricing: as hardware prices fall 20 to 30 percent annually, Google passes these savings on to customers by regularly lowering its prices. Also, with certain computing resources such as VMs, there is usage-based discounting, so the more you use them, the bigger the discount.
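
As a rough, worked illustration of the pre-emptible discount, the sketch below compares the monthly bill for a small fleet under hypothetical hourly rates. The rates are made up for the example; real Google Cloud prices vary by machine type and region.

```python
# Back-of-the-envelope cost comparison using hypothetical rates.
STANDARD_RATE = 0.10     # hypothetical $/hour per standard VM
PREEMPTIBLE_RATE = 0.04  # hypothetical $/hour per pre-emptible VM

servers = 20
hours_per_month = 730

standard = servers * hours_per_month * STANDARD_RATE
preemptible = servers * hours_per_month * PREEMPTIBLE_RATE

print(f"Standard VMs:     ${standard:,.0f}/month")
print(f"Pre-emptible VMs: ${preemptible:,.0f}/month")
print(f"Savings:          ${standard - preemptible:,.0f}/month "
      f"({1 - preemptible / standard:.0%})")
```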

Another major component of software costs in the cloud is data storage. Here too you can reduce costs by being aware of how quickly you need access to your data. Services such as nearline storage can put rarely accessed data such as audit logs into cold storage for a fraction of the cost until the day you need it.

Again, the cloud helps you save money by allowing you to have a leaner team, and I suspect that companies building on a solid managed cloud will see their costs move on a downward trajectory over time.

The Value of Data

We are surrounded by data, and most of us are losing control over the vast amounts of data we generate daily. Think of your smart phone. How well are you managing the hundreds of photographs you’re taking or the hundreds of emails you receive? And that’s just the data you’re consciously aware of. Every time you do a web search, read an article, research a product, or browse a social network, you generate data about yourself. Every cloud-based product generates data. It may be data users input themselves, such as the contacts in a CRM; data you generate simply by searching for apps in an app store; or inferred data, such as the list of people Twitter suggests you might like to follow.

There’s a lot of debate about whether this is good or bad and about how we should manage our privacy. For the most part, I want to stay neutral on the topic, as I’ve personally seen how large companies take extreme measures to protect their users’ privacy, going so far as to protect user data even from their own employees. Data is extremely valuable. A large part of LinkedIn’s value is based on it being the system of record for professional data such as connections and profiles. The same can be said about Salesforce, whose value is largely based on it being a system of record for sales data.

At LinkedIn, I saw how they built products from data. For example, in 2013, they launched University Pages, a product to help students find the right university and plan their careers. The value of this product is in how LinkedIn uses data it already has. For example, if you look up the University Page for MIT, you will see it filled with rich data. You will see the companies where MIT alumni work as well as the fields they’re in. It’s easy for you to find notable alumni and see how you’re connected to them, or look up their profiles and see their career paths. If MIT isn’t the right place for you, then you can easily find similar universities. Nothing like this existed before, and it couldn’t exist without LinkedIn’s data.

Sales Navigator is another product that LinkedIn built entirely from data. It’s a sales prospecting tool that allows you to get real-time sales intelligence from your existing network and find decision makers at companies. LinkedIn’s machine learning algorithms locate the signal in the noise. They can determine who the decision makers are, who would be open to connecting, who would potentially reply to your InMail, etc. Tilting the relevance ranking for their search engine to better fit the needs of sales teams provided LinkedIn with an entirely new product built on the thousands of little signals people give off while using the site.

It’s important to gain a clear and deep understanding of all the data regarding your product. Your product strategy needs to include data and how to unlock value from it for your customers. It’s quite common for organizations to lack the in-house talent to deal with data. Don’t worry. The future we are living in has an answer for that. Sites such as Kaggle allow anyone to sponsor a data science competition, where you make your data available and state the problem you’re trying to solve so thousands of really smart data science and machine learning experts can compete to find the best solution. The people on Kaggle have successfully cracked problems from a wide range of datasets covering everything from Enron email dumps to medical imaging and ad click optimization. Kaggle is just one example. There are many other options for leveraging the knowledge and skills of crowds to help with creating data, cleaning up your data, or using data science to derive insights from it.
