The New Post-Covid19 World Order: Remote Work, Data, Cloud, AI, IoT, Governance, Autarky and Relevance

I’m almost certain that those of us reading this blog post have already experienced some of the disruption due to Covid-19 that has played out at a huge scale across the world. The crisis that the world finds itself in as of this writing in April 2020 has brewed for almost six months. Going by recent research, the first cases of Covid-19 were identified in Wuhan, China, around 17th November 2019. It has been a long and gruesome six months, marked by global disruption, tens of thousands dead as of this date, and more than a million and a half infected around the world. Most of us woke up each day of these past few months to hear of more and more people affected and dead due to Covid-19 in countries across the world. Some of us were not as fortunate, having experienced the disease or its effects first hand. In this post, I want to imagine what the world may look like – to use that oft-used expression these days – “when this is all over”.

Covid19: Health and Economic Impact

With no ready-reckoner treatment of any kind available, owing to the virus’ novelty, valuable time was lost in the initial weeks and the spread could not be curbed in Wuhan. The Chinese authorities in Wuhan as well as the WHO have both been blamed for the predicament the world finds itself in today – and perhaps rightly, although the governments of other countries with significant numbers of cases are also to blame for their management of what was clearly known to be a highly infectious disease. There are experimental drugs and antivirals being tested, and a large number of people have recovered after such treatments – as of this writing, more than 400,000 people have recovered from this disease. However, the impact of this virus is likely to last a long time. It has been seen as a definitive point in history, marking the beginning of a new kind of social, economic and political world order, because the virus has far reaching consequences.

[Figure: Covid-19 case data. Source: OurWorldInData.org]

The US seems particularly badly hit as of this writing, with European countries such as Italy, Spain and the UK also badly affected. Some countries have had more luck than others in fighting Covid-19, South Korea being one example. In Asia, we’ve seen Iran affected quite badly by the disease, with tens of thousands of cases and thousands of deaths. China has been reporting minuscule numbers since late February, and while we’re asked to believe, in good faith, that they’ve defeated the virus’ spread, the numbers they reported to the world earlier don’t add up, with additional research suggesting the actual death and case counts from Covid-19 in China are higher. In South Asia, we’re likely to see a rapid growth in cases, and I hope that in India we will manage to keep the infection and death rates down as far as possible. As of this writing India has more than 8000 confirmed cases, and has seen more than 200 deaths because of the novel coronavirus. In summary, this was a bolt from the blue for the world, not only in terms of the impact on health and medical systems around the world, but also economically.

Staring Down the Barrel of a Global Recession

In the first week of April, roughly 6 million people in America filed unemployment claims, a historic high for that country. Chinese firms are going back to work in and around Wuhan after the lockdown in that country was lifted. I am unsure what the future holds there – if the accuracy rates of some Chinese test kits are as low as claimed in some of the reports (30-40%), we are likely to see a relapse of the condition in many of those affected – and without the social distancing and lockdown protocols that seem to be required to curtail the spread of this disease, we may see a resurgence in cases in China as we are seeing elsewhere. This can only be a bad thing for the world’s current economic condition. In fact, as of today, it has been declared that we’re in a global recession.

Politicians, policy makers and companies the world over have been pulled up for not acting fast enough, with even the WHO not being spared – its initial advice against masks is now widely seen as problematic advice that led to untold misery in countries like Italy and now the US, because it contradicted the correct practice for curtailing the virus’ spread. Economically speaking, most economists and economic policy makers have indicated that the world economy is already in recession as of 11th April 2020, and that we’ve seen a significant erosion of value in all economies of the world, with the possible exceptions of China and India, which may recover from this recession better than most. As a services-oriented economy with a manufacturing base that’s underwhelming compared to the scale of manufacturing in China, it is hard to imagine India bouncing back strongly from this shock. China controls a lot of the supply chains of global manufacturers across a diverse variety of goods, and therefore has the potential to bounce back better than India on that count. India’s tech-savvy IT businesses and startups may buck the trend and do well, but sectors like agriculture and manufacturing will suffer because supply chains everywhere have been hit. Even within Indian IT, though, demand is probably going to be hard to come by, and we may see very bad tidings for the Indian economy in general.

In India, where we’re seeing a large number of cases (nearly 7000 cases as of this writing, and almost 250 deaths), the lockdown has resulted in a huge disruption, especially affecting the non-salaried class. India’s economy has a large unorganized sector, where artisanship, daily wage labor and other such occupations account for a large percentage of the workforce. These jobs account for a large share of India’s economic activity and provide a viable income for the many who don’t possess advanced degrees or specialized skill sets. With Covid-19 requiring social distancing, many in the unskilled workforce may end up contributing to the supply chains that will run our socially distanced, remote workforce. Without this option, they’re likely to be significantly set back, economically. Already, we see how companies like BigBasket, Swiggy, Dunzo and other delivery-centric and supply-chain-centric firms (in India, and their equivalents elsewhere) are doing really well in this period of crisis and adding significant value to their customer base. We’ve also seen how telecommunications firms have seen their value expand as a consequence of increased demand at this time of crisis. And this is important in the long term, for reasons that I will explain below.

Modes and Impact of Remote Work: Technology Sector and Other Sectors

With Covid-19 pushing organizations to follow lockdown protocols and social distancing measures, plenty of Indian organizations (like their counterparts elsewhere) have adopted remote work as a viable alternative to office-based employment. The important thing about this trend, however, isn’t that organizations have adopted the few changes necessary to enable remote work – it is that the very nature of these organizations will change, thanks to Covid-19, even after this crisis has been relegated to the history books. Why is this the case? For one thing, operations managers and COOs will realize that remote work enables higher productivity and lower costs for knowledge work. They will understand the benefits of having employees manage their time at home, juggling responsibilities at home and at work, while completing the tasks and meetings required to achieve their goals. Remote working also obviates the need for large offices. The modern glass-paned concrete-jungle city is a consequence of an old school of thought – centralized, synchronized teams, communicating face-to-face and using this face time to build relationships.

Now, in the post-Covid-19 world, companies will have to amend their cultures and working styles to perform all of their functions – from sourcing to the delivery of value, to the collaboration required for sustainable organizations – completely online. This necessitates the use of telecommunications networks first and foremost, and on top of these networks, communications and collaboration technologies such as audio and video conferencing. As an example, if you’re a software developer, you may start your day with video calls, manage your features and tickets on a tool like JIRA, rely on documentation asynchronously developed by a global, distributed team, and manage your code on git with good engineering and code management practices. If you’re a manager, expect to jump on a number of video conferencing calls, and expect to build relationships remotely. If you’re an executive, you will have to cultivate the ability to write, inspire and influence your stakeholders and team across distances, with very little possibility of direct face-to-face interaction.

Manufacturing, energy and other such organizations will rely on a combination of process automation for manually intensive tasks, implement a sanitized workplace for those who are required on site by default, and enable remote work options for knowledge workers in those industries so that they may collaborate and add value remotely. In such organizations, which range in scope from industrial equipment production to chemical and oil and gas supply, there is likely to be significant disruption of the standard work practices that were implemented and perfected over the years, because of the unique challenges faced by their employees and customers in the post-Covid-19 world. Companies across industries will come to value good measurement systems for processes at many layers of the enterprise (technical and process level metrics, and also functional and organizational level metrics), and will strive to implement effective and reliable measurement and management systems, because their decisions will be asynchronous, their decision making remote, and both processes based on such data. Metrics to manage and provide feedback on employee performance, and to reward or penalize such performance, will have to take a different route from what was done in the time of face-to-face communication. Many organizations will have to adopt a process of management by metrics in addition to management by objectives, with old styles of direct management and micromanagement going out of fashion.
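
To make the idea of management by metrics a little more concrete, here is a minimal sketch in Python; the team names, metric columns, weights and numbers are all hypothetical illustrations, and a real scorecard would be fed by the organization’s own tooling:

```python
# A minimal sketch of a metrics rollup for asynchronous, remote decision making.
# Team names, metrics, weights and numbers here are hypothetical.
import pandas as pd

# Hypothetical per-team process metrics pulled from ticketing, CI and surveys
records = [
    {"team": "platform", "cycle_time_days": 4.2, "defect_rate": 0.03, "on_time_pct": 0.91},
    {"team": "mobile",   "cycle_time_days": 6.8, "defect_rate": 0.07, "on_time_pct": 0.78},
    {"team": "data",     "cycle_time_days": 3.1, "defect_rate": 0.02, "on_time_pct": 0.95},
]
df = pd.DataFrame(records)

# Roll technical metrics up into one organizational-level score per team, so a
# remote manager can review trends asynchronously instead of micromanaging.
df["score"] = (
    (1 / df["cycle_time_days"]).rank(pct=True) * 0.4
    + (1 - df["defect_rate"]).rank(pct=True) * 0.3
    + df["on_time_pct"].rank(pct=True) * 0.3
)
print(df.sort_values("score", ascending=False))
```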

Technologies such as augmented and virtual reality, which have become popular in recent years for things like product demos, entertainment, games and simulation, hold a great deal of promise for companies wanting to bring immersive collaborative experiences to their workforce. While a VR/AR meeting may seem like a simplistic addition over the video conference experience, there are many opportunities for interaction possible in this ambit. Largely, the impersonal world of online conferences and meetings has seen an attention deficit problem and low engagement. This seems to be true even for some video conference situations, where a lot of the interaction’s elements are voluntary. The subtle non-verbal cues that humans pick up and communicate with in face-to-face conversations play a big role in meetings and trust building, and this impacts credibility and consequently productivity. Naturally, face-to-face conversation sets the bar for interpersonal communication far higher than virtual alternatives (text, audio, video and VR/AR), and there is a need to over-communicate, both verbally and with gestures, when you’re on calls of any nature. This has consequences for team management and dynamics, and can come to define the culture of the organization itself. Case in point: Basecamp and their CEO Jason Fried.

What we can perhaps hope to see from the fusion of data and ML/AI with conferencing are listening and immersion aids and statistics. In some hypothetical future meeting, such immersion aids could improve the meeting experience. Given the direction that some of these innovations may take, there is an opportunity for new hardware to provide some of these immersive experiences for those at work in different settings – especially since, for some of us, the immersion we experience at work is more like that of a musician playing an instrument and less like that of someone watching or enjoying a movie – there’s a visceral level of immersion required for some tasks at work to be done effectively. Direct brain stimulation is the next step in communication beyond the audio-visual domain where we’ve operated for all of human history, and there’s work being done in this space. Some of this hardware, if we are to go by recent advances in AR and VR, is full of promise, given that researchers are experimenting with such experience-creating technologies (1, 2).

The Impact of Data and AI in the Post-Covid-19 Enterprise

Measurement systems and data will become increasingly important for asynchronous, global and distributed enterprises. The cloud enables large scale data storage in remote, managed services models. It also enables enterprises to convert capital expenditure into operational expenditure, thereby providing them the flexibility to manage costs for teams, equipment and project funding separately from IT infrastructure. Serverless and cloud-based IT applications, very contemporary at the moment, will simplify nearly every aspect of the technology-enabled enterprise, from sourcing and hiring to engineering, development and delivery, and quality and customer experience management, and metrics will drive team performance, goals and agility in projects. For instance, there is no excuse for a modern enterprise (whether it is a startup or a truly large business) not to prefer the cloud for its website. Sure, it could maintain a backup server with its site, but it is a no-brainer to adopt cloud technologies for certain use cases – the risks and costs of starting from scratch don’t make for a good business case for most enterprises.

For cloud scale serverless architectures to be effective, they need economies of scale, among other prerequisites on the adoption side such as tooling and testing. This is purely by design – cloud-based serverless architectures are products rolled out by the big cloud firms, and they depend on such scale to keep costs low. Security and scalability issues persist, but they are far less frequent than those with on-premise infrastructure. One hopes that with the tailwind strengthened by Covid-19 related pressure, many companies seeking to go cloud-native instead of building their own IT infrastructure will use these capabilities going forward.
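
To illustrate why serverless changes the cost model, here is a minimal sketch of a serverless HTTP endpoint, assuming AWS Lambda behind API Gateway; the event shape shown is the standard proxy integration, while the payload fields are hypothetical:

```python
# A minimal sketch of a serverless function: no servers to provision or patch,
# and the provider bills per invocation rather than per idle machine.
import json

def handler(event, context):
    # API Gateway's proxy integration passes the HTTP body as a JSON string
    payload = json.loads(event.get("body") or "{}")
    name = payload.get("name", "world")  # hypothetical request field
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test of the same handler, outside the cloud
print(handler({"body": json.dumps({"name": "remote team"})}, None))
```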

Companies outside of technology, across the spectrum from manufacturing, retail and telecommunications to oil and gas and energy, will likely use the cloud a great deal in the future. Covid-19 or not, many had already begun this journey. As a consultant working with clients on big data, data science and similar initiatives, I’ve seen many take on new technologies to ensure that they stay ahead of the game (by gaining competitive advantage), and cost-effectively so. Manufacturing organizations can do more on the cloud than they thought possible today. Network speeds are fast enough for highly responsive thin-client CAD applications, for example, and cloud-based servers could be used to run analyses such as finite element or CFD computations that may be required in large scale manufacturing settings. The virtualization and digitization capabilities that the cloud brings, therefore, can cut team sizes significantly and manage aspects such as costs and consumption through a pay-per-need model. Such an economic model can greatly benefit manufacturers in developing economies, if the benefits of scale elsewhere are made available to them.

Data, Cloud and Digital Transformation: Pre- and Post-Covid-19

Collecting and managing data from diverse source systems has been one of the many victories of data ingestion tools that have come into prominence and widespread use in the last decade. These ingestion tools, along with scalable data storage and processing systems that use distributed computing, have become the staple of big data and cloud-based AI/ML initiatives for numerous enterprises. I’ve written about these capabilities extensively on my blog earlier, as building blocks of enterprise scale data lakes and AI/ML systems. Such tools will come to see greater relevance and importance in the digitized enterprise transformed by Covid-19 risks.
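
As a small, concrete picture of what such ingestion looks like, here is a sketch in Python, assuming a pandas-sized CSV extract with a hypothetical event_time column; at real scale this role is played by distributed tooling (Spark, Kafka, NiFi and the like), and to_parquet needs pyarrow or fastparquet installed:

```python
# A minimal sketch of batch ingestion into a date-partitioned data lake layout.
import pandas as pd
from pathlib import Path

def ingest(source_csv: str, lake_root: str, dataset: str) -> Path:
    # "event_time" is a hypothetical timestamp column in the source extract
    df = pd.read_csv(source_csv, parse_dates=["event_time"])
    out_dir = Path(lake_root) / dataset
    out_dir.mkdir(parents=True, exist_ok=True)
    # Partition by date: a common lake layout that lets queries prune files
    for day, part in df.groupby(df["event_time"].dt.date):
        part.to_parquet(out_dir / f"dt={day}.parquet", index=False)
    return out_dir
```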

With the need for large scale analytics and insights to drive efficient decision making in a remote work setting, where the individual is far removed from the business process, there is likely to be greater demand on the suppliers of the data for such insights, and on those who can deliver the insights themselves. This will in turn necessitate machine learning and data science – and overall, this paradigm is not unlike what we have seen earlier. The drivers for the earlier data and ML/AI revolution were competitive advantage, data driven decision making to achieve the upsides in transactions, and the need for low cost, agile and lean operations. Now, however, Covid-19 related risks have resulted in a completely different set of motivations for digital transformation. For one thing, enterprises with high structural costs of business are now using the Covid-19-induced drop in demand as a rationale for restructuring and reinvigorating lean and agility initiatives, by adopting remote work, contract employees and distributed teams to save costs. In the short term, these trends will result in reduced operating expenses for existing facilities; in the long term, they will translate into reduced capital expenses and reduced investment in new facilities. Additionally, data and AI adoption will grow for the reasons mentioned above – greater adoption of automation, cognitive/smart automation driven by machine learning, and productivity drivers will enable new kinds of value delivery in the enterprise.

New AI Capabilities and their Adoption

As a practitioner in and an observer of the AI and machine learning space, I see a number of new techniques in natural language processing and generative modeling that have become research frontiers for those in machine learning today. Many of these techniques, from transformer models for natural language like BERT (link, explanation) to generative adversarial networks or GANs, have been experimented with for a wide range of applications, from language translation to face generation. With the rise of remote work and remote teams, there are many upsides to adopting such techniques. The contexts and problem statements around the use of machine learning in the post-Covid-19 world are still being revealed, and many enterprises are discovering such points of value. But in this time of distributed teams, cross-cultural and cross-language communication, digital team building and real-time translation – all while preserving the personal touch – are important for effective remote work across distances, regions and time zones. These capabilities, along with virtual avatars for bots and virtual intelligent agents, are just some of the use cases that will see enterprise AI adoption (especially of ML methods for richer data such as text, audio and video) at large scale.
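
To show how low the barrier to experimenting with these models has become, here is a minimal sketch of translation for a distributed team, using the Hugging Face transformers pipeline API; the model weights are downloaded on first use, and the message is a made-up example:

```python
# A minimal sketch of machine translation with a pretrained transformer model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")  # loads a pretrained seq2seq model
message = "The deployment is complete; please verify the dashboards."
print(translator(message)[0]["translation_text"])
```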

There is another, underlying layer of technology that will enable collaboration in the post-Covid-19 world – the telecommunications network. Large scale data transmission has become nearly ubiquitous now, with fiber optic technology becoming mainstream in the past decade. The coming years, due to Covid-19 and the risks that will follow it, will seed a reliance on the part of businesses on higher speed, near-real-time interactions that enable complex automation tasks, such as completely remotely executed surgeries. While there is no substitute for a direct, in-person diagnosis and surgery for a lot of patients, for many surgeries there is a gap between the expertise available and the needs of patients around the world, and robotic surgery tools could be the frontline equipment in these battles. The enabler of such technologies is 5G communications technology, which in turn comprises a number of enabling capabilities, such as virtualized, software-defined networks. Physical hardware (copper, optic fibre and the like) and the network connectivity we get from it have driven us to the large scale, high speed direct-to-home fiber internet revolution of today, but in the future, virtualized networks of all kinds that ride on such physical networks will play an important role in the transmission of voice, video and sensor data. The management of these networks and of their performance – their scalability, security and capacity – could become entirely automated, using machine learning techniques. These are problems already being solved by the big telecommunications technology firms of the world, who are deploying scalable, software-defined networks.
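
As a sketch of what such automation could look like at the analytics layer, here is a toy anomaly detector over synthetic link telemetry, using scikit-learn’s IsolationForest; the metric columns and the numbers are hypothetical:

```python
# A minimal sketch of automated network health monitoring: flag unusual
# link behaviour for triage or automated remediation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic telemetry: columns = [throughput_mbps, latency_ms, packet_loss_pct]
normal = rng.normal([900, 12, 0.1], [50, 2, 0.05], size=(500, 3))
faulty = rng.normal([400, 80, 2.0], [60, 10, 0.5], size=(5, 3))
telemetry = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
flags = model.predict(telemetry)  # -1 marks anomalous samples
print(f"{(flags == -1).sum()} samples flagged for investigation")
```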

Virtualization and container-based environments for running AI and ML applications have become an important capability in 2019 and 2020, and we have seen large scale acceleration of machine learning deployment using container development and orchestration/management frameworks such as Docker and Kubernetes. We’ve also seen the development of purpose-built machine learning deployment frameworks such as MLflow. These capabilities, now considered a new area of data and AI practice termed Machine Learning Ops (MLOps), are most likely to be taken up by organizations that are already using machine learning beyond the stage of prototypes and proof-of-concept activities. Mainstream technology firms and firms in the manufacturing, energy and retail sectors may find less direct use for these technologies in the immediate future, unless they’re already building machine learning at scale. Containerized and similarly managed machine learning applications matter for organizational agility: they let teams deploy ML capabilities to production and respond quickly to production-scale ML model performance issues, such as model drift. Further discussion on this topic will be in a future post, since it gets a bit technical from this point on.
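
Ahead of that future post, here is a minimal sketch of the tracking side of MLOps, using MLflow’s logging API; the run name, model version and metric values are hypothetical:

```python
# A minimal sketch of drift monitoring with MLflow: log each production
# evaluation run so degradation in model quality is visible and auditable.
import mlflow

with mlflow.start_run(run_name="weekly-drift-check"):
    mlflow.log_param("model_version", "2020.04-a")  # hypothetical identifier
    # In practice these metrics come from scoring the live model on fresh data
    auc_training, auc_live = 0.88, 0.81
    mlflow.log_metric("auc_at_training_time", auc_training)
    mlflow.log_metric("auc_live", auc_live)
    mlflow.log_metric("auc_drop", auc_training - auc_live)  # simple drift signal
```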

Sensors as our Saviours: Measured Surroundings

It goes without saying that Covid-19 has put a lot of focus back on China – the nation where it all started. From examination of the conditions that could have led to the cross-species transfer of the virus from bats to humans via pangolins, to broader examinations of the policy impact and the impact of the social and cultural norms of Chinese food habits – a lot has been written on the subject. It remains true, though, that this is an exceptional and rare event. Short of calling out the diets and social habits of the Chinese broadly, any scientifically minded root cause analysis has to start with the underlying conditions that lead to such transmissions, and that is perhaps a pathologist’s or an epidemiologist’s project.

Beyond simplistic monitoring of the conditions for the formation and transmission of such diseases, there are other direct applications for sensor based systems that can monitor and track environments where humans work – some of these measures over time could improve sanitation processes, especially in high risk zones that have historically been prone to infection.

Those in the IoT space should note the extensive need for no-touch systems, which we are all in need of due to this pandemic. For one thing, a lot of objects in our surroundings, and a lot of the household and public use items we need for daily life, require direct physical contact – a repertoire that ranges from the ordinary smartphone or tablet screen all the way to the simple devices which power our homes, such as switches, door knobs, taps and faucets. It is clear that there is a new design philosophy here that can benefit us all in such times. Providing smart sensor based systems that can open doors automatically, dispense soap automatically, or otherwise sanitize bathrooms and other public spaces, could be a shot in the arm. While these systems aren’t exactly breakthrough innovation for most companies, their widespread adoption hasn’t happened, because of the relatively low cost of alternatives and the high cost of adoption and maintenance. Once this entry barrier is broken, either by governmental mandates and policies or by increased public awareness, large scale IoT solutions like this could take on additional veneers of sophistication – ranging from gesture recognition to automated sickness detection and automated reporting of sick or needy people in public spaces, or, in sophisticated cases, automated interventions.
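
To make the no-touch idea concrete, here is a minimal sketch of an automatic door, assuming a Raspberry Pi running the gpiozero library, a PIR motion sensor on GPIO pin 4 and a relay driving the door opener on GPIO pin 17; the pin numbers and timings are hypothetical:

```python
# A minimal sketch of a no-touch door: motion in, door opens, no handle touched.
from time import sleep
from gpiozero import MotionSensor, OutputDevice

pir = MotionSensor(4)            # PIR motion sensor on GPIO 4
door_relay = OutputDevice(17)    # relay driving the door opener on GPIO 17

while True:
    pir.wait_for_motion()        # someone approaches: no touch required
    door_relay.on()              # energise the opener
    sleep(5)                     # hold the door open briefly
    door_relay.off()
    pir.wait_for_no_motion()     # wait until the area clears before re-arming
```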

Sensors as our Saviours: The Measured Human

Another important theme related to data and AI in the post-Covid-19 world stems directly from sensor technologies that are mature today, but have not been adopted at large scale for reasons of ethics, cost and other considerations. The instrumented human, or the measured human, is a concept at once interesting and fraught with danger, probably because we each have a deep-seated fear of being far simpler than we believe ourselves to be. More accurately, we are afraid of being manipulated by those with data on us. My own contention is that this is not just plausible in the post-Covid-19 world, but a strong possibility. Let me explain.

Social media is an external barometer that provides a periscope into the individual for the powers that be, while the sensors of the future, embedded in our bodies, could become internal barometers of health. Today, we see people sounding off on social media (me included) on issues that affect us – and these messages represent our thoughts and feelings not only on the subjects at hand, but also indicate our propensities and proclivities on issues entirely oblique to those we’ve expressed ourselves on. In a sense, we’re putting ourselves out there every day, and for no good reason. That data has been weaponized before. We have seen the repeated use, by politicians, media houses and technology firms, of the data we volunteer or otherwise allow them to collect, to manipulate us into buying new products, clicking on ads (if anyone does indeed click on ads anymore), and even voting for this or that political entity. In the age of the measured human, we may see sensors measuring everything from our body temperature (at different locations on our bodies) and our blood pH, to antibody and platelet counts in our blood, and so forth.

When we have this wealth of information available to an algorithm, let alone a doctor, we could identify the propensity for specific conditions and administer preventive medicine. Equally, such data could be misused, just as personal data today is used to exclude individuals from opportunities and from credit. For example, data about personal medical metrics could be misused by health insurance providers, especially in cases where applicants have pre-existing conditions. There are no purely technological solutions to such sub-problems, however; the solutions are likely to come from good processes and a reflective, empathetic design process for these systems, rather than one which prizes the short term gains of the insurer or other enterprise in question.
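
As a toy illustration of the propensity idea (and nothing more), here is a sketch that trains a classifier on entirely synthetic vitals; the features, thresholds and labels are invented, and a real system would need clinical validation, consent and regulatory oversight:

```python
# A minimal sketch of propensity screening on continuous vitals, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical features: [body_temp_c, blood_ph, platelet_count_k]
X = rng.normal([36.8, 7.40, 250], [0.6, 0.04, 60], size=(1000, 3))
# Synthetic label: mild fever together with lowish platelets marks "at risk"
y = ((X[:, 0] > 37.2) & (X[:, 2] < 230)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# The same scores that enable early intervention could equally be misused for
# exclusion (e.g., by insurers), which is a process and policy problem.
```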

The Short Term: Innovation vis-a-vis Big Government and Coverups

The Covid-19 crisis has revealed two sides and two scales of the global community’s response to the problem. One of these sides is the innovative side, exemplified by the Italian doctors who repurposed scuba diving gear to treat patients in the face of ventilator shortages. The other side of this tragedy is the massive coverup – which we are nevertheless told never happened. The first of these – the innovative side – has been more prominent in individual and community responses to Covid-19, whereas the other, more pernicious side of the global community’s response has been seen more often in big tech and big government.

It has been easy (and rather cheap) to speculate on the innovations that could solve some of the problems we face in the Covid-19 world. Masks are a good example: they have become a contentious topic, both because the WHO bungled its advice on them to the world at large, and because they may be a source of the next wave of tech innovations. Here’s one interesting thought/ideation experiment around masks, from Balaji Srinivasan, someone I have really come to respect through his Twitter posts on Covid-19:

Closer to home in Bangalore, India, there are startups coming up with sanitization equipment, such as this Corona Oven, which enables a wide range of accessories and objects to be sanitized:

Product innovations like these will solve some of the problems we may face in the post-Covid-19 world. They may help us adjust to the new rhythms of life and work, and enable us to get the bottom layers of Maslow’s hierarchy out of the way, by helping us manage the food, shelter, security and safety of ourselves and our families. They also provide the opportunity to add new product features on top of them. Kano’s model of product evolution comes to mind, and is relevant here. Just as the smartphone evolved from a pure-play voice communication device to an avenue for media consumption, and then became a platform for new value, prosthetic devices such as masks could enable us in new and unforeseen ways – and it wouldn’t have been possible without the specific adversity this crisis brought to industrial design and engineering teams.

From the frontiers of machine learning, numerous innovations in computer vision have been brought to bear on Covid-19 X-ray and other data, to detect and prevent certain conditions that arise during the respiratory illness that Covid-19 patients experience. Some other techniques rely on proxies to enable prevention, as seen in the research below:

China’s first response was to cover up the scale of its Covid-19 infections and under-report the number of deaths from Covid-19. A casual glance at the Covid-19 curves for China vis-a-vis other nations suffering from this crisis makes one wonder whether we’re seeing reliable numbers here. Many have spoken about the impact of Covid-19 on governments – the act of strengthening government by centralizing policy can rarely happen in democracies, one reason being the stabilizing power of vocal opposition parties. With the Covid-19 shock, the government’s instruments will be brought to bear on curbing personal freedoms where that may be required to prevent cascading infections. We’ve seen steps taken by nearly all nations around the world to curb freedoms and impose lockdowns, and as long as this is done in good spirit and with an intent to return the state to normalcy, we can come out of Covid-19 with rights and freedoms intact. In the case of some countries, though, this doesn’t apply – China being a key example, because by definition the ruling party limits the freedoms of the people and essentially fields the biggest army of any political party on earth. Generally speaking, the coverup that China managed could not have been pulled off in any other country in existence. This in itself was a potent vector for the transmission of Covid-19, especially given China’s important place as the world’s manufacturing powerhouse. Which brings me to the next disruption from Covid-19 to the world’s economic system: the world’s supply chain and manufacturing industries.

More generally, popular world leaders have talked about minimizing the government’s footprint for decades in the post-Cold-War era. In recent years, we’ve heard slogans such as “Government has no business being in business” and “Minimum government, maximum governance” in the context of government intervention in industry and value delivery to consumers nation-wide or even across nations. The Covid-19 pandemic and crisis have ignited an interesting dichotomy in government and politics – should governments mandatorily run institutions such as hospitals and healthcare systems? If the answer is not an absolute no, to what extent should they? Perhaps some elements of a response to this question have to do with how big the nation is, how many caregivers are required, and what means the government has to ensure quality of service to patients while operating at large scale – and these are relevant questions in the age of Covid-19. Generally (and one could say historically), crisis times – times of war, famine or epidemics – embolden governments to take strong countermeasures and revoke freedoms while enabling government officials to move faster. With the scale of democracy we have in large countries like the US and India, we probably need big government to enable leaders to serve the greater good. The other side of the coin, of course, is that very powerful governments don’t tend to part with that power easily, and excessive power concentration is a slippery slope leading to further mismanagement of countries.

Data and AI can be exceptional tools for enabling data-driven governance. If we are to look beyond the normal tendency to extend control from the government to the grass roots and lock things down from the top, we could enable citizens to take informed decisions, by educating them about the phenomena that could affect them and the consequences to them and to society at large, and then urging right action from them. Transparency, in other words, is the weapon of strong and sustainable data-driven democracies, because such democracies rely on facts and information to take decisions, not on presumptions of behavior.

The onus for such dissemination of data to enable data-driven governance should fall squarely on governments. Governments often put the onus of interpreting and transmitting vital information on the media – and this model is fraught with problems. From sensationalist news stories to erroneous reporting to important stories placed behind paywalls and “cookie clutter” screens, the world of internet news reporting is an unmitigated mess that’s becoming a train wreck for the consumer. It didn’t have to be this way. Platforms like Twitter and Facebook have been accused of fomenting unrest and of political and other kinds of bias, and despite these reputations, they’re platforms on which important news has to be disseminated and consumed. These platforms are also simplistic, and don’t lend themselves well to data-driven journalism. This isn’t a business or data/AI problem so much as a policy problem. If access to the internet is increasingly important, access to authentic information on it should be too, and the post-Covid-19 world, especially given the excesses of various world governments and media houses, will likely see a metamorphosis of the status quo.

An additional consequence of the Covid-19 crisis is the accelerated adoption of electronic payments the world over, enabled again by telecommunications, and perhaps the growth and acceleration of blockchain technologies for veracity in news and transaction reporting.

Supply Chains and Manufacturing: Globalization to Autarky?

The proverbial elephant in the room for manufacturers in the post-Covid-19 world is the global supply chain, specifically how fragile their businesses have become due to over-reliance on China and the goods and components it produces for the world. From cheap toys and car parts to computer chips and smartphone screens, there are few things China is incapable of producing at large scale today, and this excess concentration of supplier bargaining power (to use a phrase from Michael E. Porter) is purely due to the perils of excessive capitalism. I say this as a bit of a capitalist myself – after all, anyone who has benefited from India’s economic liberalization is a bit of a capitalist. What is more important than the fact that this is a case of capitalism’s excess is that the global strategy for sourcing our supply chains across manufacturing industries has followed a groupthink, and a daftly simplistic and unstrategic winner-takes-all effect has followed. In other words, it isn’t capitalism itself, but the limited set of strategic sourcing options that the West, which has controlled the world economy for decades, has had.

So, in the post-Covid-19 world, what sourcing options do we have? Continuing to run supply chains out of China carries risks, both because of the continued concentration of power with Chinese firms, and because of the political and economic leverage this gives China. China’s one-belt-one-road initiative and the encirclement strategies it has used in the context of the South China Sea and elsewhere leave little doubt about that nation’s interest in protecting its strategic assets. On the other hand, in the US and in Europe, you have ageing populations and economies that have become unsustainable for taking on low cost manufacturing jobs. In the Middle East, we see spots of bother thanks to the geopolitical and geostrategic situation there, an overreliance on oil for energy and economics, and a lack of skilled engineering and manufacturing talent, not to mention issues such as the impact of religious fundamentalism in societies across the region. India and some of the ASEAN nations are relative bright spots. Beyond these, many eastern nations – excluding Taiwan, Korea and the other rich Asian tigers – are probably the best places for maintaining competitive advantage in sourcing. We have already seen some countries incentivizing their companies to move out of China to other countries, and we will see this sourcing game continue.

However, even this approach only seeks to prolong what many have been calling the Old World Order of globalization. The new era, they claim, will see autarky to a great degree, where in-sourcing or domestic sourcing and self-reliance will be the order of the day, where boundaries will be drawn again and nations will be closed off from one another for years if not decades. Friends and enemies in this new world order will perhaps be defined by long term geostrategic relationships, rather than by the world order we see now, which is unable to solve the ordinary person’s problems (affordable healthcare, for one). Even in this world, one can foresee trade being an alternative for countries without the resources to become autarkic. The success of countries in the Middle East, for example, is due largely to their oil exports. In an autarkic world, the transitions underway in today’s automotive sector will tend towards electrification, one hopes, which means greater reliance on indigenously produced energy in each country, and reduced reliance on sources such as oil from the Middle East. However, is this idea of pure autarky a step too far? Perhaps.

Data and AI capabilities are only just being explored in the context of global supply chains. From older systems such as bar code scanners and object counters that track objects on conveyors, to modern ones such as computer vision and prescriptive analytics for on-time supply and demand matching in large supply chains, and from voice-based ordering systems to no-waiting checkout counters, companies like Amazon and Walmart have adopted data and machine learning at scale and are putting together compelling examples of how to run much larger supply chains at global scale. Some of these technologies will serve them well in the post-Covid-19 world, although one can imagine a number of products on these e-commerce platforms being sourced from places other than China – for whatever reasons. I foresee that these large digitized, high tech supply chains will be important even in the post-Covid-19 world. American autarky, in other words, seems a distant dream, or more accurately, a lost utopia.
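
To show what prescriptive supply-demand matching amounts to underneath, here is a minimal sketch that solves a toy transportation problem with scipy’s linear programming solver; the plants, markets, capacities and shipping costs are all hypothetical:

```python
# A minimal sketch of supply-demand matching as a transportation problem.
import numpy as np
from scipy.optimize import linprog

supply = [60, 40]           # units available at plants P1, P2
demand = [30, 50, 20]       # units required at markets M1, M2, M3
cost = np.array([[4, 6, 9],     # per-unit shipping cost, P1 -> M1..M3
                 [5, 3, 7]])    # P2 -> M1..M3

# Decision variables: x[i, j] = units shipped from plant i to market j
A_eq, b_eq = [], []
for j in range(3):              # each market's demand must be met exactly
    row = np.zeros((2, 3)); row[:, j] = 1
    A_eq.append(row.ravel()); b_eq.append(demand[j])
A_ub, b_ub = [], []
for i in range(2):              # each plant cannot ship more than it holds
    row = np.zeros((2, 3)); row[i, :] = 1
    A_ub.append(row.ravel()); b_ub.append(supply[i])

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x.reshape(2, 3))      # optimal shipment plan, plants x markets
```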

Environmentally Conscious Business in the Post-Covid-19 World

It isn’t an exaggeration to say that the economics of Covid-19 are being stressed more than the root cause of the problem – that of infectious diseases and how they spread. A large part of the reason why SARS, MERS and now Covid-19 have spread is the conditions and policies in China and elsewhere that allowed it. Specifically, Chinese policies on wet markets and the breeding of wildlife for human consumption have been one of the contentious underlying topics of discussion.

More broadly, this points to the importance of environmentally conscious business practices. Far too often, we settle for what is pragmatic and benefits humans, and don’t emphasize the impact our business actions have on the environment at large. This may seem like a simplistic complaint, but it is in fact a deep and important one. The depth comes from the fact that the enterprises and businesses we run are but one set of processes in a broad chain of environmental processes that sustain the planet. When we have simplistic policies concerning complex systems, we risk, in the words of Nassim Taleb, naive interventionism (iatrogenics), where we are unsure what true consequences our actions can have, even if we execute those actions “in good faith” or “to the best of our knowledge”. The cycle of value involving the animals in question – bats and pangolins – is vast. They are part of an ecological system of balance: by consuming the lower forms of life they prey upon, these animals sustain that balance. When that balance is upset – by the close proximity of species that rarely meet in the wild, by the selective breeding of these species, or by other means – there are likely to be shocks to that ecological balance from which we may never fully recover.

Learning and Staying Relevant in the Post-Covid-19 World

For me personally, the last several years have revealed the power of the internet to educate. From short courses on various topics in data science, machine learning and AI, to extensive programs such as post-graduate diplomas – large scale skill development seems to have been worked out online. On the internet today, there are numerous free resources, especially for those in technology, computer science, software, data and analytics – these areas of contemporary advances and cutting edge research see a surfeit of information and content that is well-nigh an embarrassment of riches – and this is all good, until we see a gap between supply and demand. Numerous learning opportunities have opened up specifically post-Covid-19: Google, EdX and Coursera have all announced a number of new courses, some of them free. If you know where to look, you can find incredible content on the internet to teach you nearly anything in machine learning, from the basics to the latest algorithms and research.

But here’s the thing – in reality, there is a supply vs. demand gap in online learning. Specifically, there is a great deal of supply of content and courses in a few areas of technology, science and engineering, and largely nothing in other areas. There is research hidden behind paywalls in important areas such as epidemiology, which is a core research domain as regards the Covid-19 crisis. This huge disparity is also a problem of curation, of practicality in the dissemination of certain subjects, and of economies of scale.

The internet as a medium is not best suited to teaching certain skills – sitting down in front of a computer is not the best way to learn how to turn a component on a lathe in a machine shop, or how to play a guitar (although you could argue about the latter, as I myself have learnt a lot of guitar just by looking things up online and through self-directed practice). The limitations of this medium in disseminating certain kinds of knowledge are well known and well attested, and yet there are attempts to move entire courses, and even masters degrees, online. While initially this was viewed with skepticism, in the post-Covid-19 world we can assume that such online learning will gain momentum – and if my experiences have taught me anything, it is that with the right tools and interactions, you can learn a surprising amount online.

Staying relevant in the post-Covid-19 world is a harder task than just learning in this increasingly socially isolated and digitized world. Learning is just the acquisition of skill, whereas relevance is a consequence of being the right person in the right environment. The latter is therefore contingent both on skills and on our own explorations of, and (entrepreneurial or other) responses to, the conditions we’re in – and this consequently influences how we use the skills we have acquired. For instance, we know that in the post-Covid-19 world there are likely to be sea changes in which industries are relevant and which ones aren’t. Putting our feet in the right places, and bringing value to the new interactions we become part of, can make all the difference between being relevant in the future and creating some of that history, and being just another casualty of history.

Concluding Remarks

The world post-Covid-19 is a time of change, indicating a complex, new reality. There are economic shocks that will impact us for years, if not decades, to come. We’re in a place of incredible opportunity that poses incredible challenges as well. Enterprises as we knew them will change forever, adopting new styles of work and learning, and professionals will awaken to a new age of online learning and, in some cases, a protracted search for relevance and professional meaning. Smart governments will adopt data and communicate and govern based on facts – even as others use these opportunities to grow large scale government influence; indeed, questions of governmental oversight of essential services, including public health, will be debated for years to come. Data and AI adoption will accelerate in enterprises and will enable new kinds of collaboration and remote work, required for these months and perhaps years of social distancing and isolation. Enterprises will accelerate their move to the cloud, benefiting from large scale and low cost services for data, web and other technologies. Emerging technologies such as augmented and virtual reality may become a staple of our boardrooms and classrooms. More and more learners will try to adapt to online learning, and more teachers and professors will be compelled to learn to teach in this medium, even as new technology interventions improve learning experiences. As many governments around the world rush to build self-reliance and their respective versions of autarky on many essential manufactured products, the global supply chain will start looking different, and we may see a greater infusion of data and AI technologies in the businesses that control our supply chains and logistics. We may see the growth of blockchain and other trust-centric technologies, for applications in medicine and the news, in addition to finance, where blockchain finds its most common use cases. The post-Covid-19 world is a clarion call to problem solvers and innovators of all kinds, as much as it is to those in policy and governance, public health and medicine. The world order has been upset, and the new world order that manifests after this pandemic is behind us will look to the resourceful and the inventive, even as people look towards being part of sustainable, healthy and safe work and living environments in the future.

 

Questions that Data Scientists Hate Getting

This is a variation on a Quora answer.

When asked how data scientists can be effective, a few things come to mind:

  1. Skills: A curiosity and sufficient skill in data analysis methods and techniques
  2. Fundamental needs: The data and access to the tools to perform analysis — and this would include the environments
  3. Performance needs: Sufficient resources, time and good enough processes to validate or invalidate hypotheses and build models based on them
  4. Excitement needs: Sufficient support and latitude to independently deploy projects based on successful hypotheses tested and models built

Note that while these criteria listed above begin with the fundamental skills required to do data science, the focus shifts in items 2, 3 and 4, to what is required for data scientists to be effective. The first of these are the fundamental needs, such as the data itself, and the access to the required tools, be they statistical or machine learning tools, databases, visualization libraries, or other resources. The second of these are the performance needs, which will help the data scientist do whatever it is that they do, a bit better than how they’re doing this now. This includes processes and systems that enable the data scientist to improve their own capabilities. Finally, we have excitement needs, which enable data scientists to do outstanding work — a large part of this is being able to reuse what has been built, through deployment of various kinds.

It is in this context that we can discuss how managers of data science teams can help them be effective.

If there is one kind of behaviour in analytics managers that I wish changed, it is the one I describe in the following lines.

A lot of what data scientists do is experimental, throw-away analysis. However, it is tempting for a number of managers (many of whom have made up their minds that some hypothesis holds true, or will work) to assume that they’re right, and that what is required from the data scientist is the detailed model that formalizes the relationship.

This kind of assumption makes for poorly designed projects, and doesn’t make good use of the data scientist’s time for exploratory analysis, for evaluating the development of different kinds of models, and for finding out what works, given the dataset.

Naturally, given the time-bound nature of businesses and the poor understanding of analytics at the executive level in many organizations, such clients are commonplace, and such managers often find themselves pushing for results without the right underlying systems, data or resources. Sometimes, they begin projects with data scientists who lack the specific skills to build the kinds of models required to solve the problem at hand. While all this may be the case, the challenge many data scientists in business and consulting face is dealing with such unreasonable expectations.

In this specific context, some questions that shouldn’t be posed to data scientists might be along the following lines:

  • “Assuming that hypothesis X works, how long would it take to build a full fledged application using this hypothesis X?”
  • “The domain experts are convinced that this hypothesis X is true. Why don’t your results reflect this too?”
  • “The values of R_sq or precision/recall I see here don’t reflect what can be done with the data. Aren’t better results possible?”

These kinds of questions are simplistic when posed in the initial stages of a data science activity or experiment, and in some situations they could be dangerous too (although they’re innocuous mistakes that any manager new to analytics initiatives may make).

For the same reason that “a little knowledge is a dangerous thing”, such project managers might be gambling with the fortunes of the entire analytics program they serve, because they base even large projects on such naive and unverified assumptions. Were they to change their behaviour by giving due consideration to exploratory data analysis, and to what the data actually says about viable models and applications that may be built, they might be putting their data scientists and engineers on the path to success.
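
To make this concrete, here is a minimal sketch of the cheap exploratory step being argued for: before scoping a full project around “hypothesis X”, check whether the hypothesised relationship shows up in the data at all (the variables below are synthetic stand-ins):

```python
# A minimal sketch of hypothesis screening before committing to a full project.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
driver = rng.normal(size=200)                  # hypothetical input metric
outcome = 0.1 * driver + rng.normal(size=200)  # weakly related target

r, p_value = stats.pearsonr(driver, outcome)
print(f"correlation={r:.2f}, p={p_value:.3f}")
# A weak or insignificant correlation here is cheap evidence that the
# "assume hypothesis X works" project plan needs rethinking.
```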

Pragmatic Business Transformation with AI

I interact with numerous data scientists and people in the data science space on LinkedIn on a daily basis. Many of them have insightful things to say about how data and artificial intelligence are transforming the business landscape. There is a certain alarmism that accompanies every discussion on artificial intelligence, in the context of the automation of business processes, and with good reason. One such person is Vin Vashishta, whose posts often address pressing challenges in data and AI. Here is a recent post by Vin and my comment. This blog post was originally on Medium, and is an expansion of the ideas in that comment.

Traditional Thinking Couches

Traditional thinking about how work gets done has, in general, the following elements. Traditional work- and time-based thinking is based on scientific reductionism and paradigms such as linearity. In truth, this thinking has allowed us to come very far. The division of labour is the very basis of capitalism, for instance, and modern capitalism thrives on specialization and the management of work in this form.

  1. Linearity: The tendency to think of all work as ultimately reducible into linearly scalable chunks. Less of a task requires fewer resources, whereas more work requires more resources. To be fair, this kind of thinking has been around for millennia, since at least the time of human settlement and the neolithic age.
  2. Reducibility: The tendency to think of work as infinitely reducible, such that if we complete each sub-task of a job in a certain sequence, we have the end result of completing the whole job. Systems engineers know better, and understand holism and reductionism in systems as analogies to this traditional view of reducibility and how it might affect the way we see work today.
  3. Value-based Work and Tangibility: Another element of what seems to define work traditionally is the presence of tangible objectives, such as items shipped, or certain unambiguously measurable criteria met. In this world, giving a customer a good experience when they shop, or enabling customers or partners to better be served or serve us better, aren’t seen as value, but as non-value-added activities. For a long time, approaches to business transformation focused on the reduction of non-value-add activities from business process, with the view that this will improve process efficiency.

When we think about how businesses will take up AI and machine learning capabilities, we’re compelled to think in terms of the same above lenses. They’re comfortable couches that we cannot get out of, and as a result, possess and dominate our thinking about AI deployment in enterprises.

AI-Specific Cognitive Biases

Some dangers of thinking driven by the above principles are as follows:

  1. Zero-sum automation: The belief that there is a fixed pie of opportunity, and that when we give human jobs to machines, we deprive humans of opportunities. Naturally, this is not true, because general, self-organizing intelligences such as humans are more than capable of discovering and finding new opportunities. Fixed-pie thinking is probably one of the key reasons behind AI alarmism. I would additionally argue that at some level, AI alarmism is also the result of bogeyman thinking, a paradigm in which a strawman such as AI is assigned blame for large scale change. In the past, a lot of technological progress and change happened without such bogeymen, even as other changes were being prevented because of such thinking. Another element of bogeyman thinking is the tendency to ignore complementarity, including situations where humans and AI tools could work alongside each other, resulting in higher process effectiveness.
  2. Value bias: While there is truth to the notion that processes have value-add steps and non-value-add steps, it is a feature typical of reductionism to assume that we don’t need the non-value-add steps at all, while they may be serving true purpose. For instance, all manufacturing processes that transform raw material to product have ended up requiring quality checks and assurance. As a feature of the evolution of industrial production processes, quality assurance and control have become part of nearly all manufacturing processes that operate at scale. QA and QC represent a non-linearity in the production system, or a feedback loop which provides downstream process performance information to upstream processes.
  3. Exclusivity: A flip side of bogeyman thinking, combined with value bias, is the assumption that certain capabilities are exclusively human. Interpreting emotional expressions on a human face, for example, was long a task only humans were good at; we knew of no higher animals, let alone technologies, with that level of sophistication. Today, much of the work in the ML/AI space addresses precisely these so-called soft aspects of human life, such as judging and understanding people's expressions and learning their behavioural patterns, and these capabilities are maturing steadily within AI systems. This contradicts traditional notions of human-exclusive capability in many areas, and is naturally seen as a threat rather than a capability enhancer. The truth is that exclusivity, too, should be treated as a logical fallacy when discussing the development of AI systems.

It is common to fear someone who seems able to do everything we can, until that person becomes a friend. I'd say the jury is still out on what AI cannot yet do, and as a result our approach to business transformation (as with transformation in other areas) should be humans + AI, not AI in lieu of humans. This synergy is already visible in the manufacturing world, and perhaps we will see it make its way into other spheres as well. Fixed-pie thinking won't get us anywhere when we have capability amplifiers like AI to assist humans.

Concluding Remarks

A key element of future human productivity is the discovery and exploitation of new opportunities on new frontiers. My suggestion to business leaders thinking about AI adoption for automation and process improvement is to expand the pie first: create new opportunities for the business to do more, and enable your employees to take up and contribute more of that work. When you then equip them with AI, the resulting humans+AI combination will take your organization to new heights.

Big Data: Size and Velocity

One of the changes envisioned in the big data space is the need for data that isn't so much big in volume as big in relevance. This is a crucial distinction to make. Here, we examine business manifestations of relevant data, as opposed to merely large volumes of data.

What Managers Want From Data

It is easier to ask managers and executives the question "what do you want from your data?" than to answer it as one. As someone who has worked in Fortune 500 companies with teams that use data to make decisions, I'd like to share some insight:

  1. Managers don't necessarily want to see data, even when they talk about using data for decision making. They want to see interpretations of the data that help them make up their minds and take decisions.
  2. Decision making is rarely one-dimensional or based on a single variable of interest.
  3. Decision making involves not only operational data descriptors (which are most often instrumented for collection at data sources) but also business context that rarely arrives through those instruments.
  4. Decisions can sometimes be taken on uncertain estimates, but many situations require accurate estimates of outcomes to drive decision making.

From Data To Insight

The process of getting from data to insight isn't linear. It involves exploration, which means collecting more data and iterating on one's results and hypotheses. Broadly, the path from data collection to insight passes through data preparation and analysis as intermediate stages. Nor is the data scientist's job done once the insights are generated: there is then a need to collect more data, refine the models, and construct better views of the problem landscape.
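To make this loop concrete, here is a minimal Python sketch. The collect, prepare and analyse functions are hypothetical stand-ins for whatever pipeline a team actually runs, not any particular library's API.

```python
# A minimal sketch of the iterative data-to-insight loop; the functions
# below are hypothetical stand-ins, not a real library's API.

def collect(source):
    # Stand-in: pull raw records from a source (database, API, sensor feed)
    return list(source)

def prepare(raw):
    # Stand-in: clean and reshape raw records for analysis
    return [r for r in raw if r is not None]

def analyse(data):
    # Stand-in: fit a model or compute a summary, and report whether
    # the evidence is strong enough to stop iterating
    insight = sum(data) / len(data)
    return insight, len(data) >= 1000

data, done = [], False
while not done:
    data += prepare(collect(range(500)))  # each pass collects more data
    insight, done = analyse(data)         # ...and refines the picture

print(insight)
```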

Data Quality

A large percentage of the data analyst's or data scientist's problems have to do with the broad area of data quality and readiness for analysis. Within data quality specifically, a few things stand out (a short sketch of basic checks follows the list):

  1. Measurement aspects – whether the measured data really represents the state of the variable being measured. This involves properties of the measurement system such as linearity, stability, bias, range and sensitivity.
  2. Latency aspects – whether time-sequenced data is recorded and logged in the correct order and at the right intervals.
  3. Missing and anomalous values – missing or anomalous readings and records, as opposed to anomalous behaviour, which is a whole other subject.
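Here is a small pandas sketch of what the latency and missing/anomalous-value checks might look like; the column names, timestamps and domain limits are illustrative assumptions, not a prescription.

```python
import pandas as pd

# Illustrative sensor log: a timestamped series of readings.
df = pd.DataFrame({
    "ts": pd.to_datetime(["2020-04-01 00:00", "2020-04-01 00:02",
                          "2020-04-01 00:01", "2020-04-01 00:03"]),
    "reading": [21.5, None, 21.7, 95.0],
})

# Latency check: how many records were logged out of time sequence?
out_of_order = (df["ts"].diff() < pd.Timedelta(0)).sum()

# Missing values: readings that were never recorded.
missing = df["reading"].isna().sum()

# Anomalous values: a crude range check (the 0-50 limits are assumed
# domain knowledge about the sensor, not derived from the data).
anomalous = (~df["reading"].between(0, 50) & df["reading"].notna()).sum()

print(out_of_order, missing, anomalous)  # 1 1 1
```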

Fast Data Processing

Speed is an essential element in the data scientist's effectiveness. The speed of decisions is the speed of the slowest link in the chain, and traditionally that slowest link has been the collection of the data itself. This is changing: sensors in IoT settings now deliver huge data sets and massive streams on their own, while data processing frameworks have improved by leaps and bounds, with frameworks like Apache Spark leading the charge. In such a situation, the scarcity is no longer in acquiring data. Indeed, the very availability of massive data lakes signals the need for more data scientists who can analyse that data and arrive at insights. It is in this context that the rapid creation of models, the rapid extraction of insights, and meta-algorithms that automate such work become valuable.
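As a minimal illustration, a PySpark sketch that pushes an aggregation over a large sensor data set down to the cluster rather than onto one analyst's machine; the path and column names are placeholders, not a real pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-rollup").getOrCreate()

# Placeholder path and schema: a large store of CSV sensor readings.
readings = spark.read.csv("s3://bucket/sensor-readings/*.csv",
                          header=True, inferSchema=True)

# Aggregate per sensor in parallel across the cluster.
summary = (readings
           .groupBy("sensor_id")
           .agg(F.avg("value").alias("mean_value"),
                F.count("*").alias("n_readings")))

summary.show()
```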

Where Size Does Matter

There are some problems that genuinely require very large data sets; many systems gain effectiveness only with scale. One example is the collaborative filtering recommendation engine found everywhere in e-commerce and related industries, sketched below. Size does matter for these data sets: small ones are prone to truth inflation and to poor analysis results driven by poor data collection. In such cases there is no respite other than to collect, store and analyze data at scale.
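A toy sketch of item-based collaborative filtering using cosine similarity; the ratings matrix is illustrative, and a real engine would operate on millions of users and items.

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, cols: items); 0 = unrated.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

# Item-item cosine similarity.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

# Predict user 0's score for item 2 as a similarity-weighted average
# of the items that user has already rated.
user, item = 0, 2
rated = R[user] > 0
weights = sim[item, rated]
pred = weights @ R[user, rated] / weights.sum()
print(round(pred, 2))
```

With only a handful of users, these similarity estimates are extremely noisy, which is precisely why such engines become useful only once the matrix is large and densely populated.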

Volume or Relevance in Data

Now we come to the dichotomy we set out to resolve: whether volume matters more in data sets, or relevance. Relevant data meets a number of the criteria listed above, whereas data measured purely by volume (petabytes or exabytes) tells us nothing about its quality or its usefulness for real analysis.

Volume and Relevance in Data

We now look at whether volume itself may become part of what makes data relevant for analysis. Unsurprisingly, for some applications, such as neural network training or data science on high-frequency time series, volume is undeniably useful. More data in these cases means more can be done with the model, more partitions or subsets of the data can be taken, and more theories can be tested on different representative samples, as the sketch below illustrates.
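A small numpy sketch of this point: with enough rows, a data set can be carved into many disjoint partitions and the same theory tested on each. The simulated data stands in for a genuinely large set.

```python
import numpy as np

rng = np.random.default_rng(42)
big = rng.normal(loc=10.0, scale=2.0, size=1_000_000)  # stand-in for a large data set

# Partition into disjoint subsets and test the same theory on each:
# here, simply that the mean is close to 10.
subsets = np.array_split(rng.permutation(big), 20)
estimates = [s.mean() for s in subsets]
print(min(estimates), max(estimates))  # stable across partitions at high volume
```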

The Big Three Vs

So where does this leave us with respect to what makes Big Data, Big Data? We've seen the popular trope that Big Data is data that exhibits volume, velocity and variety; some add a fourth characteristic, veracity. Overall, the availability of relevant data in sufficient volume should address the data scientist's needs for model building and data exploration. The question of variety remains, but as data profiling approaches mature, data wrangling will advance to a point where variety is no longer a problem but a genuine asset. The volume and velocity dimensions, however, are open to a trade-off in a large percentage of cases. For most model-building activities, such as linear regression or classification on a data set whose characteristics and behaviour we know, so-called "small data" is sufficient, as long as it is representative and relevant, as the sketch below suggests.
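A brief numpy illustration of the "small data is sufficient" point, assuming the sample is representative: the same linear regression is fitted on a million points and on five hundred, with a known underlying relationship simulated for the purpose.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=1_000_000)
y = 3.0 * x + 7.0 + rng.normal(scale=5.0, size=x.size)  # known linear relationship

# Fit on all one million points...
slope_full, intercept_full = np.polyfit(x, y, deg=1)

# ...and on a representative random sample of 500 points.
idx = rng.choice(x.size, size=500, replace=False)
slope_small, intercept_small = np.polyfit(x[idx], y[idx], deg=1)

print(slope_full, slope_small)          # both close to 3.0
print(intercept_full, intercept_small)  # both close to 7.0
```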

Data Science: Beyond the Hype

While there is justifiable excitement in the technology industry (and other industries) about the widespread availability of data and of algorithms to process and make sense of it, I sincerely think, like many others, that some of the hype behind Big Data is unfounded.

For many decades, "small data" has been studied in science and industry with the intent of constructing mathematical models, i.e., approximate, error-prone mathematical representations of phenomena. In some ways, the scientific method is all about such data analysis. We often hear about "truth inflation", the amplification of effects that comes from drawing broad generalizations from small data sets. We hear about a lack of data impeding research, and about fabricated data and spurious results. Many scientific findings have come under scrutiny for these reasons, and perhaps the analysis of population-scale data, as Big Data promises, may help. The key difference, however, between the statistics of past decades, from legends such as Fisher and George Box, and that of present-day stalwarts in applied statistics and machine learning like Nate Silver, Sebastian Thrun and Andrew Ng, is the ability to leverage computing to analyse large data sets.

A lot of the discussion around Big Data centres on the so-called four Vs: volume, velocity, variety and, increasingly, veracity, referring to the growing speed and range of data generated in the information age. What is forgotten often enough is that below the hype, below the machine learning algorithms, the databases and the technologies, we still have the same underlying principles.

The types of data, the mathematical methods we use to evaluate them, and the fundamental concepts are unchanged, and understanding this is often the key to knowing whether and when to sample from your big data set. This matters more than we realize, because sampling is not obsolete. Often, a well-collected sample of data is more than sufficient to establish or test a hypothesis, as the sketch below shows.
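As a sketch of that claim, a two-sample t-test run on modest samples drawn from simulated populations; the page-load-time scenario and all the numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated populations: e.g. page load times before and after a change.
before = rng.normal(loc=320, scale=40, size=2_000_000)
after = rng.normal(loc=310, scale=40, size=2_000_000)

# A well-collected sample from each is enough to test whether the
# change reduced load times; no need to scan every record.
t_stat, p_value = stats.ttest_ind(rng.choice(before, 1_000),
                                  rng.choice(after, 1_000))
print(t_stat, p_value)
```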

In my view, newcomers to the data science and big data revolutions ought to take a course in statistics, statistical thinking and statistical reasoning first; it lays the foundation for everything that follows. The internet, and most developed and even developing countries, are awash with resources for learning programming and computer-based problem solving, but critical thinking and statistical thinking seem to be harder skills to pick up.

Statistical thinking requires not only a level of mathematical rigour but also an ability to embrace notions of uncertainty, probabilistic thinking, and a fundamental change in one's notions of cause and effect. Perhaps this is a big step for many. The relative certainty of the logic of programming languages may feel more welcoming, which is probably why we see so many discussions about Hadoop and Spark and so few about statistical hypothesis testing or time series autocorrelation models (a small example of the latter follows).
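For a taste of the statistical side being crowded out, a minimal numpy sketch computing lag-k autocorrelation directly from its definition, on a simulated AR(1)-style series; the coefficient and series length are arbitrary choices.

```python
import numpy as np

def autocorr(x, lag):
    # Sample autocorrelation at a given lag: covariance of the series
    # with its lagged copy, normalised by the variance.
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return (xc[:-lag] @ xc[lag:]) / (xc @ xc)

rng = np.random.default_rng(1)
# An AR(1)-style series: each value depends on the previous one plus noise.
series = np.zeros(1000)
for t in range(1, series.size):
    series[t] = 0.8 * series[t - 1] + rng.normal()

print([round(autocorr(series, k), 2) for k in (1, 2, 5)])  # decays roughly as 0.8**k
```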

So if you want to cut through the hype, see data science for what it is by breaking it into its elements: the data (which may come from ever more diverse sources), the tools (algorithms, computers) and the science (which, in this case, is statistics). Not everyone is a data scientist, whatever some articles on the web have begun to claim, but neither is there a single set of skills that makes one a "data scientist". Some say data scientists are glorified statisticians; others say they are statistically competent programmers well versed in machine learning; the truth is probably somewhere in between.

Furthermore, data visualization, another aspect of the data science hype, is both an art and a science, which perhaps means charts and graphs can both enlighten and obfuscate. In my view, skill in visualization alone doesn't make you a data scientist (nor does knowledge of machine learning methods alone, or skill in programming R for ETL alone). Cutting through the hype, the pragmatic path is to acquire a wide array of skills and depth in a few. Like many engineers, who have a broad swath of knowledge but expertise in only a few areas, this is the profile most data scientists are likely to have.

There's definitely more that can be said about specific aspects of the data science "movement", but what is certain is that the value and present-day relevance of the statistics underlying most of this science cannot be overstated. Statistics, hopefully, will become as basic a skill as learning a language, holding a conversation, or writing a well-argued paragraph.