The data privacy and compliance landscape continues to change significantly in 2022, and understanding these changes early will help you chart your path, and that of your organization, over the next few years.
EMERGING MEGATRENDS IN THE WORLD OF DATA
01. Increased regulatory activity.
In the last couple of years, the focus on personal data protection legislation has intensified across all major geographies. Legislation to secure the protection of data and privacy is in place in 71% of countries tracked by UNCTAD (as of December 2021). This is just the beginning: data is now assessed to be a strategic asset worldwide, and countries want to regulate it to achieve their economic, social, and other objectives. The GDPR (EU) itself is being reassessed for its effectiveness today. Regulations are becoming more specific and loopholes are getting tighter. The landmark CJEU ruling on the Schrems II case (EU), which declared the European Commission’s Privacy Shield Decision invalid on account of invasive US surveillance programs, the approval of the CPRA as an amendment to the CCPA (US), the new amendments to the APPI (Japan), and the progress on drafts of the ADPPA (US), the ePrivacy Regulation (EU), and the Digital Services Act (DSA) (EU) all reflect this development.
Sectors that handle sensitive, high-risk data will see stringent and explicit privacy protection regulations. For instance, in the finance industry, the FTC (US) announced revisions to its GLBA Safeguards Rule in 2021; the updated rule requires highly prescriptive safeguards for data protection and privacy. DORA (draft legislation) aims to boost the operational resilience of the financial services sector in the EU and complement the existing GDPR. This rapid, simultaneous progress in regulatory oversight of data across geographies is being driven by the cross-pollination of ideas among regulators.
02. No data science without accountability.
Algorithmic accountability is being discussed across the world today. Transparency, explainability, fairness, and non-discrimination requirements are already being written into law. Privacy policies are now beginning to cover the entire data science value chain, extending to how data is handled and used throughout design and development cycles. Processes for privacy-aware models, privacy-aware systems, and privacy-preserving AI are gaining emphasis. The USA’s proposed Algorithmic Accountability Act would require organizations using AI to establish processes for assessing and mitigating AI bias. The EU’s proposed AI Act would regulate high-risk AI systems, impose transparency requirements, and ban AI systems that can have adverse impacts on privacy and safety. We expect that FDA-like institutions and pharma-like safety laws will be mandated across domains, and compliance alone will not be enough. Meanwhile, content such as Netflix’s recent docudrama The Social Dilemma, concerns around facial recognition, and the political machinations around TikTok have picked up steam. As a result, consumers are becoming more aware, and data ownership, privacy, and accountability are becoming popular dinner-time conversation topics.
The battle will not be about whether consumers should share data with companies, because for now that answer will be a reluctant and judiciously considered “yes”. The new battle lines will be drawn around WHICH companies can demonstrate responsible handling of data, internally (employees, shareholders) and externally (end-consumers, business partners, regulators).
03. A shift in the competitive landscape.
The last few years have seen companies hoard specific classes of data and develop applications on top of it. Their valuations have been driven by speculation (possible further applications) and by ownership of that data. However, regulators and lawmakers are now looking at the question of who owns the data. If data is determined to be a sovereign resource, or owned by the individual rather than being a private resource, then companies become custodians rather than owners of the data. If this comes to pass, the competitive advantage will shift from the collection of data to its use. The incentives will shift from investing in data moats to investing in data refineries that process, enrich, and apply raw data.
04. The changing economics of data science.
Production use of data science (i.e. where data science is on the frontlines, making or aiding business decisions that impact top- or bottom-line growth) requires that ML models work reliably in a real-world context, not just on training data. Demonstrating this is important not only to internal stakeholders such as the board and CXOs, but also to regulators and customers. The definition of a model that “works” is being expanded beyond how accurate or fast it is, to include fairness, transparency, and explainability, among other ideas. As a result, the recent emphasis on ever more complex models will be reversed: simpler, maintainable, explainable, and operationally efficient models will become the trend.
05. Data localization for security and independent access.
Governments around the world are now asserting authority over the personal, health, financial, and other critical data belonging to their citizens and residents. Legal frameworks that restrict the cross-border flow of all or specific types of data are already in place in some countries (such as Russia and India). Many others are bringing in localization laws to gain unfettered access to such data and to protect it from foreign surveillance. Such laws are becoming increasingly crucial to every country’s national security, law enforcement, and economic competitive advantage.
06. Towards a level playing field.
Digital services (online platforms) are at the forefront of transforming day-to-day experiences such as communication and shopping. Accelerating digitization has created an imbalance in which a few large players (think companies like Google and Meta) control the ecosystems of the digital economy. This imbalance is magnified by the monopoly they assert over the data their platforms collect. Regulatory authorities are now starting to focus on measures that address this imbalance, create a level playing field for other businesses, and give customers more choice. For instance, the EU’s Digital Markets Act (DMA), along with the Digital Services Act (DSA), aims to create a safer and more open digital space, establishing a level playing field that fosters innovation, growth, and competitiveness, both in the European Single Market and globally.
WHAT SHOULD YOU DO?
01. Be a responsible data company.
The concerns around the use of data and regulatory interest in ensuring a safe data ecosystem are real. Data is here to stay, and there is immense value to be created out of it. It makes sense to adopt the mindset, values, processes, and tools designed to keep all stakeholders’ trust and confidence at all times. This will help ensure that you stay in business, and can credibly signal trustworthiness to your customers as well. The reputational and legal costs of non-compliance can be high. Consumers too will likely start paying with their dollars for horses that come from high-integrity stables.
02. Focus on creating value through your data, rather than safeguarding it.
In the last few years, companies have built complex data systems with high engineering costs, not just in their development but also in their operation and maintenance. This has been driven by the desire to scale, especially on the data collection side. “More data” was the axiom. But now, this axiom carries two new risks: (a) the risk of having to share the fruits of data collection with third parties, and (b) the risk of high privacy management costs if consumers invoke portability, erasure, or other such requests. Companies would be well-advised to take a balanced approach that emphasizes (a) data consumption and applications as much as data collection, and (b) the quality and incremental value of data rather than its quantity.
03. Build for transparency and trust.
GDPR-like laws require declarations of intent, usage of data, and consent. But more than that, one should assume that consumers, regulators, and employees will ask questions about the defensibility of data applications. Therefore, applications have to be built from the ground up assuming that someone is always looking over your shoulder. Aim to exceed compliance requirements, and implement periodic data (engineering and science) audit processes to examine compliance, because there will be moving pieces (and people) within the organization that will continue to act as weak links.
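As a concrete (and deliberately simplified) illustration, here is a minimal Python sketch of one automated check such an audit might run: flagging dataset fields that hold personal data but lack a declared purpose or legal basis. The manifest structure, field names, and what counts as a finding are hypothetical assumptions, not drawn from any specific regulation or tool.

```python
# Minimal sketch of one automated check in a periodic data audit: flag dataset
# fields that contain personal data but lack a declared purpose or legal basis.
# The manifest schema and example values are hypothetical.
from dataclasses import dataclass


@dataclass
class FieldRecord:
    dataset: str
    field: str
    contains_personal_data: bool
    declared_purpose: str | None  # e.g. "order fulfilment"
    legal_basis: str | None       # e.g. "consent", "contract"


def audit_fields(records: list[FieldRecord]) -> list[str]:
    """Return human-readable findings for personal-data fields missing declarations."""
    findings = []
    for r in records:
        if r.contains_personal_data and not (r.declared_purpose and r.legal_basis):
            findings.append(f"{r.dataset}.{r.field}: missing declared purpose or legal basis")
    return findings


if __name__ == "__main__":
    sample = [
        FieldRecord("orders", "email", True, "order fulfilment", "contract"),
        FieldRecord("clickstream", "device_id", True, None, None),
    ]
    for finding in audit_fields(sample):
        print(finding)
```

A check like this is only one piece of an audit trail, but running it on a schedule helps catch the “moving pieces (and people)” before a regulator or customer does.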
04. Partner with responsible data practitioners.
When customers interact with, say, Airbnb and there is a privacy violation, they blame Airbnb. They don’t distinguish between employees, partners, and vendors. Given the complexity of modern data platforms, many individuals will inevitably be involved in the process, and risks propagate through the system. Look to partner with or hire individuals and companies that share the desire to be a responsible data company and take their data regulatory obligations seriously.
WHAT HAPPENS IF COMPANIES DON’T ADAPT?
01. Face continuous regulatory risks.
Punitive fines for data privacy violations and non-compliance are an ever-present risk under GDPR, CCPA, and other laws. More importantly, such violations bring the wrong kind of attention to the company and make your stakeholders pause and re-evaluate their positions. Here’s an example: if an analyst, engineer, or even an automated system sends an email to a customer listed in some spreadsheet, and that customer happens to have asked for their information to be deleted (“forget me”), it is a violation of GDPR. The discovery process will place the company’s systems, processes, and attitude towards customer data in public view. The intent to adhere to the letter and spirit of the law, or the lack of it, will be clear.
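To make the failure mode concrete, here is a minimal Python sketch of an erasure-aware mailing step that screens recipients against a suppression list before anything is sent. The CSV layout, column name, and function names are hypothetical; a real deployment would enforce this inside the campaign or CRM system rather than in an ad-hoc script.

```python
# Minimal sketch: screen a mailing list against "forget me" (erasure) requests
# before sending. The export format and names below are illustrative only.
import csv
import io

# Stand-in for an export from the privacy/DSAR system listing erasure requests.
ERASURE_EXPORT = """email
b@example.com
"""


def load_suppression_list(csv_text: str) -> set[str]:
    """Emails of customers who invoked their right to erasure."""
    return {row["email"].strip().lower() for row in csv.DictReader(io.StringIO(csv_text))}


def filter_recipients(recipients: list[str], suppressed: set[str]) -> list[str]:
    """Drop suppressed addresses before any campaign or analyst script sees them."""
    return [r for r in recipients if r.strip().lower() not in suppressed]


if __name__ == "__main__":
    suppressed = load_suppression_list(ERASURE_EXPORT)
    campaign_list = ["a@example.com", "b@example.com"]  # e.g. from "some spreadsheet"
    safe_list = filter_recipients(campaign_list, suppressed)
    print(f"Sending to {len(safe_list)} of {len(campaign_list)} recipients")
```

The point of the sketch is not the code itself but where the check lives: if suppression happens centrally and automatically, a stray spreadsheet cannot turn into a GDPR violation.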
02. Repel the best data talent.
When the fairness, equity, and transparency of ML models, analyses, or data products are questioned, employees who build these systems will be asked to defend them due to the technical nature of the work. This potential “exposure” will make employees cautious about which companies they choose to work for. They will look for legal, engineering, and process excellence so they can be comfortable doing their job. The best data talent will also look for the best data process companies to amplify their resumes, rather than risk playing mechanic at an organization where a lot of the data processes need fixing.
03. Lose business.
If data sharing laws come into effect, the value of a company will be driven by data enrichment/preparation and downstream models, rather than by the raw data it owns. This could significantly alter the economics of the company, and potentially put it out of business, especially if a data moat was its sole competitive advantage.
FINAL THOUGHTS
To summarize, there is now an essential need to completely rethink data-driven competitive advantages. The writing on the wall leaves no room for ambiguity about “whether” data compliance trends will converge to strict regulations; it is only a question of “when”. Customers, revenue growth, and the best talent will flow naturally toward companies that can demonstrate they are responsible custodians of data.