
Executive interview: Jeetu Patel, general manager of collaboration and security, Cisco


By Melisa Osores

Published: 01 Jun 2022 14:30

Job and development opportunities for people globally are unevenly distributed, but human potential is not. If we want to solve the problems that plague human beings and businesses, we should leverage technology to make opportunities follow human potential, not the other way round.

That is the vision of Jeetu Patel, executive vice-president and general manager of collaboration and security at Cisco, who, in an interview with Computer Weekly, shared his thoughts on the future of collaboration and the hybrid work model.

Patel, who is passionate about technology's potential for inclusion, reviewed the main challenges that today's collaboration tools still need to solve. He spoke of the need for proper change management, and for a mindset and culture aligned with the advantages of the hybrid model, to move towards a future in which immersive virtual meetings are the rule and geography and distance no longer matter when working as a team.

Patel also discussed the importance of never losing sight of security and privacy in the evolution of collaborative technology.

Editor’s note: This interview has been edited for clarity and conciseness.

Computer Weekly: What do you think are the main challenges of the hybrid work model right now, considering both technological and cultural aspects?

Jeetu Patel: If you look at what the future is going to be – and we definitely think it’s going to be hybrid – people will sometimes choose to work from home, sometimes they’ll work in the office, sometimes somewhere in between. But while everyone wants to work in a hybrid model, this model has a lot of challenges and it doesn’t work as fluidly yet. That is why it is harder to work in a hybrid model than it was when everybody was in the office or everybody was at home, for that matter.

I’ll give you a couple of quick examples of where these challenges lie with hybrid. Imagine you’re in a hybrid mode, which means four people are in a conference room and three are at home or somewhere else, remotely. One of the challenges is that, for the people sitting together, a very natural thing to do is to get up and start drawing on the whiteboard, right? And the people at a distance have no idea what’s going on. It’s very hard to follow the conversation when someone is pointing at something and you don’t know what they’re actually pointing at.

Another challenge arises when people are sitting at a long table and a couple of them are at the far end. You may not be able to see those people’s facial expressions or body language, so you don’t feel like you’re connecting with them. So the people in the room feel left out and the people who are remote feel left out.

Another problem you might have is audio. If someone is sitting in the back of the room, you can’t hear them as well as someone sitting closer to the microphone. And the reality is that without good audio, you can’t have a good meeting.

These are all examples of practical challenges that people face in hybrid [models] all the time. If you don’t solve these problems, one of two things will happen – either you have someone who feels like a second-class participant, who doesn’t feel like they’re in on the action, or the only way people succeed is when they’re all together.

Both of those things are bad outcomes for society, because what you want is to have an inclusive world, where people are able to participate in a global economy regardless of where they are, regardless of what language they speak, regardless of what socio-economic status they have. What you want to have is: “Hey, it doesn’t matter if I’m at home or at work – work is not where I am, work is what I do.”

These kinds of challenges have to be solved, because the implications of not solving them tend to be far greater than we might think. What happens is people start to go back to saying: “Hey, look, we need to all be together, otherwise we can’t be productive.” And one thing we’ve learned in the past two years is that that is not true for most jobs – you can be anywhere in the world and participate. Wouldn’t it be beautiful if someone in a village in Bangladesh had the same access to opportunities as someone in Silicon Valley?

That future, in my mind, is something that we, as humans, owe it to ourselves to create. And hybrid is the first step in creating that future, but you can’t make it happen if you don’t overcome the challenges. Our job, as technology provider companies, is to solve those problems.

So if it turns out you want to get up and start drawing on a whiteboard, you can do it on a digital whiteboard and everybody can draw with you. If you can’t see somebody at the back of the room, the camera automatically creates an individual video stream of that person so we can zoom in on them. If you’re not able to hear someone well because they are away from the microphone, the system should be smart enough to equalise voices. Those kinds of technology are hard computer science problems that we have actually solved at Cisco, so we’re really excited about what this can do for us.

Do you think companies that aren’t taking these steps towards that goal don’t feel they need to invest as much in that kind of technology to create these kinds of environments? Do you think they lack a culture or have a different mindset?

Patel: Well, for any technology to go mainstream, there are a few things that have to happen. One is that the technology has to work. It can’t be too difficult to use that technology, that’s the first thing. Number two, it has to be affordable. It cannot break the bank as you are doing it, so the cost and the return on investment have to be very obvious. And, most importantly, the change management and the cultural readiness have to actually be developed within the company.

The first two are obvious: the technology has to work and it has to be affordable. The third is a little more nuanced. What I mean by that is you have to get people to adopt the mental model that says: “Geography does not determine someone’s contribution level – output determines someone’s contribution level.”

Ideally, geography should not be a limiter for output. In fact, it should be an accelerator to output, so that wherever you are and wherever you feel most comfortable, you should be able to provide the right level of contribution. That requires that you get companies to think that way, that managers think that way, that people who are participating from other locations don’t feel guilty about not being in the same place. And it takes a while to make sure those things happen.

“You fundamentally do not create an inclusive world if you mandate and require that people have to be in a certain location to get the job done”

Jeetu Patel, Cisco

You fundamentally do not create an inclusive world if you mandate and require that people have to be in a certain location to get the job done. I could be a single parent trying to take care of my child, and I still want to make sure I can contribute to society and work. I should be allowed to do that, but today that creates a lot of constraints for people – they have to choose between taking care of their child and working, and those should not be mutually exclusive choices. You should be able to do both.

We have to make sure that the culture is ready and that the cultural shift is happening. All of us have to fight for it, but you can’t fight for that unless the technology works flawlessly and is affordable, so those are the pieces. The good news is that people have tasted the honey now. Hybrid work has proved itself, and there is no debate on that front any more.

So now, some people like to work together, some people like to work from home, and giving that flexibility will really allow you to get the best kind of people. And eventually, business is just a direct reflection of one factor: do you have the best people working for you? You will attract and be able to get the best talent in the world when you give them all the flexibility to work from anywhere for your company.

Following this idea, how do you see the future of collaboration in this hybrid work model with all the advances in tools, such as using the cloud, having natural speech recognition, speech-to-text conversion, noise reduction technologies and artificial intelligence?

Patel: If you think about the last two years, we have made a tremendous amount of progress. In fact, it’s night and day. Imagine if the pandemic had happened 25 years ago. It has been painful now, but life would have been much worse 25 years ago. It’s not just about the productivity of people being able to get stuff done – people have an intrinsic need to feel connected, and video makes people feel connected, because I can see you, I can talk to you, I can see your expressions. I can see and feel the emotion, and that has a lot to do with it. But we’re just getting started.

If you fast-forward 10 years, I don’t think people will collaborate just by looking at two-by-two boxes on a two-dimensional screen. That’s not going to be the eventual way people engage with each other. I think there’s going to be much more sophistication around that. The human brain won’t be able to tell the difference between sitting in front of someone in real life and sitting in front of someone in virtual form. The brain will start to forget, and the technology will disappear. And that makes the future very, very exciting.

The basics are going to be there – people will trust the system, so that no one is going to feel their security will be compromised or their privacy will be compromised. That’s something that all of us have a responsibility to do because privacy is a basic human right. The need to feel secure with your intellectual property, with your identity, with all those pieces, is really needed for people to be able to experience a non-oppressive world. So security and privacy will be pretty important.

But the immersiveness of the experience will also be very important. And you won’t feel restricted. Right now, I don’t feel as free in the virtual world as I do in the real world, and that freedom should not only be as good as in the real world – we should actually make it 10 times better. We can make sure people not only feel free to walk around, with the cameras following them, but that, all of a sudden, they can think of an object, pull it up virtually, and both of you can start to manipulate it. That is an immersive experience.

So even though we’re 10,000 miles apart, we’re designing a car together – the model of the car is something both of us can edit in 3D space and move around, together. And that’s pretty magical.

Also, sometimes I don’t want to talk to you in synchronous mode. In some cases, in business, I just want to go out and give someone an update, and that can happen asynchronously. So collaboration will not only be synchronous, but also asynchronous.

Artificial intelligence will be embedded into everything you do, so language will not be a barrier. I’m speaking in English, but if you want to hear me, and see what I’m seeing, in Spanish, that should be something the system does automatically. Guess what? We do that today – you can see the subtitles right now with speech-to-text conversion and real-time translation in 108 languages.

Those are the things that I think make these experiences get infinitely better. There are going to be thousands of these small things that we will continue to keep doing and improving upon. But the end goal is that the human brain doesn’t feel a cognitive load as a result of being virtual. That’s the end goal.

And if you don’t feel the cognitive burden, then geography and distance won’t matter. And when geography and distance don’t matter, anyone from anywhere can participate in a global economy without any penalty. And that opens up the world to three billion digital workers.

Anyone with an idea will be able to solve a problem, because opportunity right now is very unevenly distributed, but human potential is not. Human potential is very evenly distributed, so we should make opportunity follow potential, rather than the other way round.

Security is an important part for the hybrid and collaborative environment. What do you think are the main security issues that enterprises need to address, or at least be aware of in this environment? Are people still the weakest link in security?

Patel: This is a very important issue for society to handle because there are a few things that are happening in parallel that are compounding the problem. For one, people are working from anywhere. Number two, they are working with applications in the cloud. Number three, the threat actors are getting more sophisticated – it used to be hackers and now they are nation states.

And the consequences of those threats are far more dire than they used to be, because before, it was a virus that got into your computer and it was a little bit of a pain, but now you can not only lose trade secrets, but you can lose lives, hospitals can shut down, transportation networks can shut down, the financial system can shut down, the water supply can shut down or power grids can shut down.

When you start thinking about that, the implications are very high. The scary part is that the bad guys – the attackers, the adversaries – have to get it right once, but the good guys, the ones protecting the environment, have to get it right every single time. So it’s an extremely important problem for the times we live in. Apart from climate change, there are not many problems in the world more important than making sure it is safe from cyber warfare and cyber crime, because even warfare starts with cyber before it gets to land, air and sea.

We know that the threat landscape is getting broader. Adversaries are becoming more sophisticated, and then there is another problem – four million jobs a year go unfilled in the security world. So there is a huge shortage of skilled labour.

What do you do in this scenario? Well, security has to become more of an automation and data problem, rather than a people problem. Yes, you need skilled people, but what you need is for security to be built in such a way that there is a network effect in security.

What do I mean by a network effect? It means that the more people I have, the more signals I can collect from people, and the more signals I collect, the better I can detect threats effectively. And the more I can detect threats, the better I can remediate them. And the more I remediate the threats that are out there, the more people will come to me for security. And that cycle starts all over again.

That’s a network effect – the more people who use a security solution, the more valuable it becomes to everyone who uses it. And in security, it’s a game of scale.

The more you can go out and protect the world through automation and machine learning and proactive detection and remediation and prevention, the better off you will be. I think there will be a tremendous amount of innovation in the next decade or two in security, on both fronts – there will be innovation in how you protect an organisation and there will be innovation in how to attack an enterprise.

We have to make sure we out-innovate the attackers, and that only happens with an extreme focus on building technology. This is a problem that can only be solved with technology. You have to build a lot of technology that actually helps prevent threats from occurring and, when they do occur, it detects them very quickly and responds and remediates them as quickly as possible.

That is the goal of security – prevent threats, but when you can’t prevent them, detect them immediately so you know something is wrong, and then respond, remediate and record in real time or near-real time. That’s what this industry will evolve into.

If you do that right, lives will be saved, and if you do it wrong, people will suffer. So every security provider has a tremendous responsibility and we need to make sure we do our part for the betterment of society.

Sometimes, small companies think they are not important enough to be attacked, but they are also suppliers, or part of the supply chain, of a larger company or a public organisation, and they become the gateway through which attackers gain access.

Patel: It’s a lowest-common-denominator problem. You can’t say that some companies are simply not sophisticated enough, so they shouldn’t be protected. In fact, security has to be usable by every individual, and it can’t be intimidating. It can’t be scary. It has to be comprehensible and understandable, because the majority of the attacks that happen today, and most of the breaches that occur, are not due to malicious behaviour by an employee, but to negligence.

No one comes into the office saying: “Today I’m going to be negligent.” But what ends up happening is that when the technology is too complicated, people don’t understand it and mistakes are made. So, the negligence is a byproduct of the complexity of the systems. What you have to do is simplify security in a way that negligent behaviour doesn’t compromise the company, the individual and the data.

Do you think security needs to be embedded into technology products by default?

Patel: Security has to be integrated, transparent and intuitive. It is none of those things today. It is complicated, it is opaque, and it is completely not intuitive. And that’s the problem we have to solve as an industry.






Why trusted execution environments will be integral to proof-of-stake blockchains



Ever since the invention of Bitcoin, we have seen a tremendous outpouring of computer science creativity in the open community. Despite its obvious success, Bitcoin has several shortcomings. It is too slow, too expensive, the price is too volatile and the transactions are too public.

Various cryptocurrency projects in the public space have tried to solve these challenges, and there is particular interest in the community in solving the scalability challenge. Bitcoin’s proof-of-work consensus algorithm supports a throughput of only seven transactions per second. Other blockchains, such as Ethereum 1.0, which also relies on proof-of-work consensus, demonstrate similarly mediocre performance. This has an adverse impact on transaction fees, which vary with the amount of traffic on the network: sometimes below $1 and at other times above $50.

Proof-of-work blockchains are also very energy-intensive. As of this writing, the process of creating Bitcoin consumes around 91 terawatt-hours of electricity annually – more energy than is used by Finland, a nation of about 5.5 million people.

Some commentators see this as the necessary cost of securing an entire financial system, rather than merely the cost of running a digital payment system. Others think the cost could be done away with by developing proof-of-stake consensus protocols. Proof-of-stake consensus protocols also deliver much higher throughputs: some blockchain projects are aiming to deliver upwards of 100,000 transactions per second. At this performance level, blockchains could rival centralized payment processors like Visa.

Figure 1: Validators

The shift toward proof-of-stake consensus is quite significant. Tendermint is a popular proof-of-stake consensus framework. Several projects such as Binance DEX, Oasis Network, Secret Network, Provenance Blockchain, and many more use the Tendermint framework. Ethereum is transitioning toward becoming a proof-of-stake-based network. Ethereum 2.0 is likely to launch in 2022 but already the network has over 300,000 validators. After Ethereum makes the transition, it is likely that several Ethereum Virtual Machine (EVM) based blockchains will follow suit. In addition, there are several non-EVM blockchains such as Cardano, Solana, Algorand, Tezos and Celo which use proof-of-stake consensus.  

Proof-of-stake blockchains introduce new requirements

As proof-of-stake blockchains take hold, it is important to dig deeper into the changes that are unfolding.  

First, there is no more “mining.” Instead, there is “staking.” Staking is a process of putting at stake the native blockchain currency to obtain the right to validate transactions. The staked cryptocurrency is made unusable for transactions, i.e., it cannot be used for making payments or interacting with smart contracts. Validators that stake cryptocurrency and process transactions earn a fraction of the fees that are paid by entities that submit transactions to the blockchain. Staking yields are often in the range of 5% to 15%.  
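As a rough illustration of the economics (the figures below are hypothetical, not drawn from any specific chain), a validator's reward is proportional to its share of the total stake:

```python
# Minimal sketch of staking economics; all figures are hypothetical.

def annual_reward(stake: float, yearly_fee_pool: float, total_staked: float) -> float:
    """A validator's share of yearly fees, proportional to its stake."""
    return yearly_fee_pool * (stake / total_staked)

stake = 32.0             # tokens this validator has locked up
total_staked = 10_000.0  # tokens staked across the whole network
fee_pool = 800.0         # yearly transaction fees paid out to validators

reward = annual_reward(stake, fee_pool, total_staked)
print(f"staking yield: {reward / stake:.1%}")  # 8.0%, inside the 5-15% range above
```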

Second, unlike proof-of-work, proof-of-stake is a voting-based consensus protocol. Once a validator stakes cryptocurrency, it is committing to staying online and voting on transactions. If, for some reason, a substantial number of validators go offline, transaction processing stops entirely, because a supermajority of votes is required to add new blocks to the blockchain. This is quite a departure from proof-of-work blockchains, where miners could come and go as they pleased and their long-term rewards depended on the amount of work they did while participating in the consensus protocol. In proof-of-stake blockchains, validator nodes are penalized, and a part of their stake is taken away, if they do not stay online and vote on transactions.
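A hedged sketch of that supermajority rule: a block is added only when validators controlling more than two-thirds of the total stake have voted for it (the two-thirds threshold is typical of BFT-style protocols such as Tendermint, but exact rules vary by chain).

```python
# Sketch of a stake-weighted supermajority check; the 2/3 threshold is
# typical of BFT-style protocols, but the exact fraction varies by chain.

def block_finalised(stakes: dict[str, float], voters: set[str]) -> bool:
    """True if validators controlling more than 2/3 of total stake voted."""
    total_stake = sum(stakes.values())
    voting_stake = sum(stakes[v] for v in voters)
    return voting_stake > (2 / 3) * total_stake

stakes = {"val_a": 40.0, "val_b": 35.0, "val_c": 25.0}
print(block_finalised(stakes, {"val_a", "val_b"}))  # True: 75% of stake voted
# If val_b (35%) goes offline, the remaining 65% can never reach supermajority,
# so transaction processing halts -- which is why downtime is penalized.
print(block_finalised(stakes, {"val_a", "val_c"}))  # False: only 65%
```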

Figure 2: Honest voting vs. dishonest voting.

Third, in proof-of-work blockchains, if a miner misbehaves, for example, by trying to fork the blockchain, it ends up hurting itself. Mining on top of an incorrect block is a waste of effort. This is not true in proof-of-stake blockchains. If there is a fork in the blockchain, a validator node is in fact incentivized to support both the main chain and the fork. This is because there is always some small chance that the forked chain turns out to be the main chain in the long term. 

Punishing blockchain misbehavior

Early proof-of-stake blockchains ignored this problem and relied on validator nodes participating in consensus without misbehaving. That is not a good assumption to make in the long term, so newer designs introduce a concept called “slashing”. If a validator node observes that another node has misbehaved – for example, by voting for two separate blocks at the same height – the observer can slash the malicious node. The slashed node loses part of its staked cryptocurrency. The amount of cryptocurrency slashed depends on the specific blockchain; each has its own rules.

Figure 3: Misbehaving validators are slashed by other validators for reasons such as “Attestation rule offense” and “Proposer rule offense”
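The double-vote condition described above – voting for two separate blocks at the same height – can be detected mechanically. A minimal sketch, assuming for illustration that each vote carries the validator, the height and a block hash:

```python
# Sketch: detect equivocation -- one validator voting for two different
# blocks at the same height. The vote format is assumed for illustration.

def find_equivocators(votes: list[tuple[str, int, str]]) -> set[str]:
    """votes are (validator, height, block_hash) tuples."""
    first_vote: dict[tuple[str, int], str] = {}
    slashable: set[str] = set()
    for validator, height, block_hash in votes:
        prior = first_vote.setdefault((validator, height), block_hash)
        if prior != block_hash:  # same height, different block: slashable
            slashable.add(validator)
    return slashable

votes = [("val_a", 100, "0xabc"),
         ("val_b", 100, "0xabc"),
         ("val_a", 100, "0xdef")]  # val_a votes twice at height 100
print(find_equivocators(votes))   # {'val_a'}
```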

Fourth, in proof-of-stake blockchains, misconfigurations can lead to slashing. A typical misconfiguration is one where multiple validators, which may be owned or operated by the same entity, end up using the same key for validating transactions. It is easy to see how this leads to slashing: two nodes signing with the same key will eventually vote for different blocks at the same height, which is indistinguishable from deliberate equivocation.

Finally, early proof-of-stake blockchains had a hard limit on how many validators could participate in consensus. This is because each validator signs a block twice, once during the prepare phase of the protocol and once during the commit phase. These signatures add up and can take up quite a bit of space in the block. This meant that proof-of-stake blockchains were more centralized than proof-of-work blockchains – a grave issue for proponents of decentralization – so newer proof-of-stake blockchains are shifting towards cryptosystems that support signature aggregation. For example, the Boneh-Lynn-Shacham (BLS) cryptosystem supports signature aggregation: thousands of signatures can be aggregated in such a way that the aggregated signature occupies the space of only a single signature.
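The space saving can be sketched with py_ecc, the Python library that implements the BLS ciphersuites used by Ethereum 2.0 (assuming py_ecc is installed; the toy secret keys below are for illustration only):

```python
# Sketch of BLS signature aggregation using py_ecc (pip install py_ecc).
# Ten validators sign the same block; the ten signatures compress into one.
from py_ecc.bls import G2ProofOfPossession as bls

block = b"block at height 12345"
secret_keys = list(range(3, 13))       # toy keys; never use such keys in production
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

signatures = [bls.Sign(sk, block) for sk in secret_keys]
aggregate = bls.Aggregate(signatures)  # one signature standing in for ten

assert bls.FastAggregateVerify(public_keys, block, aggregate)
print(f"{len(aggregate)} bytes for {len(signatures)} signers")  # 96 bytes
```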

How trusted execution environments can be integral to proof-of-stake blockchains  

While the core philosophy of blockchains revolves around the concept of trustlessness, trusted execution environments can be integral to proof-of-stake blockchains.  

Secure management of long-lived validator keys  

For proof-of-stake blockchains, validator keys need to be managed securely. Ideally, such keys should never be available in clear text: they should be generated and used inside trusted execution environments. Trusted execution environments also need to ensure disaster recovery and high availability, and must be always online to cater to the demands of validator nodes.

Secure execution of critical code

Trusted execution environments today are capable of more than secure key management. They can also be used to deploy critical code that operates with high integrity. In the case of proof-of-stake validators, it is important that conflicting messages are not signed. Signing conflicting messages can lead to economic penalties according to several proof-of-stake blockchain protocols. The code that tracks blockchain state and ensures that validators do not sign conflicting messages needs to be executed with high integrity.  
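A minimal sketch of that high-integrity signing logic: before signing, the signer (which in practice would run inside the enclave, alongside the key) consults a record of what it has already signed and refuses anything conflicting. The interface below is hypothetical; real validators implement similar rules, for example via the EIP-3076 slashing-protection interchange format.

```python
# Sketch of enclave-hosted anti-slashing logic: refuse to sign a second,
# different block at a height already signed. Interface is hypothetical.

class SlashingProtectedSigner:
    def __init__(self) -> None:
        self._signed: dict[int, str] = {}   # height -> block hash already signed

    def sign_block(self, height: int, block_hash: str) -> str:
        prior = self._signed.get(height)
        if prior is not None and prior != block_hash:
            raise RuntimeError(f"refusing conflicting signature at height {height}")
        self._signed[height] = block_hash
        return f"signed({block_hash})"      # real code would sign with the enclave key

signer = SlashingProtectedSigner()
signer.sign_block(100, "0xabc")             # fine
signer.sign_block(100, "0xabc")             # re-signing the same block is fine
# signer.sign_block(100, "0xdef")           # would raise: conflicting message
```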

Conclusions

The blockchain ecosystem is changing in very fundamental ways. There is a large shift toward using proof-of-stake consensus because it offers higher performance and a lower energy footprint as compared to a proof-of-work consensus algorithm. This is not an insignificant change. 

Validator nodes must remain online and are penalized for going offline. Managing keys securely while keeping them always available online is a challenge.

To make the protocol work at scale, several blockchains have introduced punishments for misbehavior. Validator nodes continue to suffer these punishments because of misconfigurations or malicious attacks on them. To retain the large-scale distributed nature of blockchains, new cryptosystems are being adopted. Trusted execution environments that offer disaster recovery, high availability, support new cryptosystems such as BLS and allow for the execution of custom code with high integrity are likely to be an integral part of this shift from proof-of-work to proof-of-stake blockchains.

Pralhad Deshpande, Ph.D., is senior solutions architect at Fortanix.


How NFTs in the metaverse can improve the value of physical assets in the real world

The metaverse has become inseparable from Web3 culture. Companies are racing to put out their own metaverses, from small startups to Mark Cuban and, of course, Meta. Before companies race to put out a metaverse, it’s important to understand what the metaverse actually is.

Or what it should be.

The prefix “meta” generally means either “self-referential” or “about”. In other words, a meta-level is something about a lower level. From dictionary.com:

“a prefix added to the name of a subject and designating another subject that analyzes the original one but at a more abstract, higher level: metaphilosophy; metalinguistics.

a prefix added to the name of something that consciously references or comments upon its own subject or features: a meta-painting of an artist painting a canvas.”

The key aspect of both definitions is self-reference. Logically, the term “metaverse” should then mean “a universe that analyzes the original one, but at an abstracted level”. In other words, the metaverse will be an abstraction layer that describes our current physical world.

The metaverse should be an extended reality, not a whole new one. 

And that’s why the trend has been heading toward a metaverse that’s built on crypto. Crypto, just like the world, has a kind of physical nature to it. You can’t copy a Bitcoin or an NFT, just as the coffee cup on your desk can’t occupy the same physical space as the cup next to it. The space itself is singular and immutable and can’t be copied. Even if you make a 3D-printed replica, it’s not the same cup. So crypto is very well suited to building an immutable layer that describes the real world: in crypto, we can build models of the real world that carry over many of its properties.

The natural opportunity will be in digital twins. Digital twins create a universe of information about buildings or other physical assets and are tied to the physical world. In other words, they are that meta-layer. By integrating blockchain technology in the form of NFTs, all the data and information surrounding the physical twin can be verified and saved forever, tracked with the asset itself. When you think about it, digital twins are the metaverse versions of their physical twins, and the technology enhances features of the real world.

Validation is the key to metaverse truth

When evaluating crypto/blockchain’s relationship to the metaverse, it’s important to remember that crypto is about verification and validation. So when considering blockchain’s relationship to the metaverse, it makes sense to think about it as a digital space that can be validated. 

So in the metaverse, it’s time to expand on what an NFT is and what it can hold. NFTs cannot be copied because they are tied to the validation and verification process in time, which is what makes them nonfungible. As the capabilities of NFTs grow, they are becoming a new information dimension that is tied to the real world.

NFT domains are going to be core to this idea. They become a nonfungible data space, uniquely tied to us and our activity on Web3. In the metaverse, these domain NFTs can represent a house, recording and validating every visitor, repair, event and so on. And that record, and that infrastructure, can be sold not just with the house but as a core component of the house, increasing its value.
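A hedged sketch of the idea: an asset's history can be kept as a hash-chained event log, so each record commits to everything before it and tampering is detectable, much as an NFT's on-chain provenance is. The structure below is illustrative only, not any particular NFT standard.

```python
# Sketch: a tamper-evident event log for a physical asset (illustrative only,
# not an actual NFT standard). Each entry's hash commits to the prior entry.
import hashlib
import json

class AssetRecord:
    def __init__(self, asset_id: str) -> None:
        self.asset_id = asset_id
        self.entries: list[dict] = []

    def append(self, event: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"asset": self.asset_id, "event": event, "prev": prev})
        self.entries.append({"event": event, "prev": prev,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

house = AssetRecord("123-main-st")
house.append("roof repaired")
house.append("inspection passed")
print(house.entries[-1]["hash"][:16])  # final hash commits to the full history
```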

By clearly defining what a true metaverse is, both for developers and investors, we can start to move toward a meaningful version of it. 

Leonard Kish is cofounder of Cortex App, based on YouBase’s distributed data protocol.


Protecting the modern workforce requires a new approach to third-party security



Ask any HR leader: they’ll tell you that attracting and retaining employees continues to be a top challenge. While this has never been easy, there’s little doubt that the COVID-19 pandemic (and distributed workforces) have made things even more complex. As you read this article, many workers are actively considering leaving current roles that don’t support their long-term goals or desired work-life balance. While organizations attempt to navigate this “Great Resignation”, more than 4 million workers are still resigning every month.

As 2022 marches on, hiring teams face another massive obstacle: global talent shortages. These trends have companies rushing to find creative stop-gap solutions to ensure business continuity in difficult times. It shouldn’t come as a surprise that more companies are relying on third-party vendors, suppliers and partners to meet short-term needs, reduce costs and keep innovation humming. In addition, the rise of the gig economy has more employees entering into nontraditional or temporary working relationships. This trend is particularly prevalent in the healthcare industry, but as many as 36% of American employees have a gig work arrangement in some form, either alongside or instead of a full-time job. 

What’s more, the corporate supplier ecosystem has become exponentially more complex. Amidst the supply chain vulnerabilities revealed by the pandemic, organizations are expanding and diversifying the number of supplier relationships they’re engaging in. Meanwhile, regulators have stepped up efforts to manage these business ecosystems.

In many cases, outsourcing to temporary workers or external partners makes good business sense. Sometimes, given the constraints of the talent pool, there’s simply no other option for a company. Either way, organizations should be aware of the security risks that third parties bring — and the steps they can take to minimize the chances of a breach occurring. 

Third-party security challenges remain prevalent

Bringing a third-party workforce onboard in a rushed way – and without proper governance or security controls in place – leaves organizations open to significant cyber risk. These risks can stem from the third-party users or suppliers themselves, or from those third parties’ access being compromised and used as a conduit for lateral movement, enabling attackers to reach the company’s most sensitive data. Sadly, a lack of centralized control over suppliers and partners is all too common, no matter the industry. In many organizations, unlike full-time employees, third-party users are managed on an ad hoc basis by individual departments using manual processes or custom-built solutions. This is a recipe for increased cyber risk.

Take the now-infamous Target breach, which remains among the largest-scale third-party security breaches in history. In this incident, attackers made their way onto the retail giant’s network after compromising login credentials belonging to an employee of an HVAC contractor, eventually stealing payment and personal information belonging to as many as 110 million customers.

In today’s world, where outsourcing and remote work are now the norm, third parties require corporate network access to get their jobs done. If companies don’t reconsider third-party security controls – and take action by addressing the root of the problem – they’ll remain open to cyber vulnerabilities that can devastate their business and its reputation.

A pervasive lack of visibility and control

Although reliance on third-party workers and technology is widespread in nearly every industry (and in some, it’s common for an organization to have more third-party users than employees), most organizations still don’t know exactly how many third-party relationships they have. Even worse, most don’t even grasp precisely how many employees each vendor, supplier or partner brings into the relationship or their level of risk. According to one survey conducted by the Ponemon Institute, 66% of respondents have no idea how many third-party relationships their organization has, even though 61% of those surveyed had experienced a breach attributable to a third party. 

Grasping the full extent of third-party access can be particularly challenging when there’s collaboration with outsiders through cloud-based applications like Slack, Microsoft Teams, Google Drive or Dropbox. Of course, the adoption of these platforms skyrocketed with the large-scale shift to remote and hybrid work that has come about over the last two years.

Another challenge is that although an organization may try to maintain a supplier database, it can be near impossible, with current technical capabilities, to ensure that the database is both current and accurate. Because of processes like self-registration and guest invites, external identities remain disconnected from the security controls applied to employees.

Growing regulatory interest and contractual obligations

As incidents and breaches attributable to third parties continue to rise, regulators are taking notice. For instance, Sarbanes-Oxley (SOX) now includes several controls targeted explicitly at managing third-party risk. Even the Cybersecurity Maturity Model Certification (CMMC) explicitly targets improving the cybersecurity maturity of third parties that serve the federal government. The ultimate goal of such regulations is to bring all third-party access under the same compliance controls required for employees so that there’s consistency across the entire workforce and violations can be mitigated quickly.

Today, we expect companies to push their suppliers, vendors and partners to implement more stringent security controls. In the long run, however, such approaches are unsustainable, since it’s difficult, if not impossible, to enforce standards across a third-party organization. Hence, the focus will need to shift to ensuring that identity-based perimeters are robust enough to identify and manage threats that third parties may pose.

Currently, decentralized identity solutions are moving into the mainstream. As these technologies become more widely accepted, they’ll continue to mature. This will help many organizations streamline third-party management in the future. It will also assist companies on their journey toward zero trust-compatible identity postures. Incorporating ongoing security monitoring and implementing continuous identity verification systems will also become increasingly important. 

Five steps to mitigate third-party risk today

Today’s challenges are complex but not unsolvable. Here are five steps organizations can take to improve third-party access governance over the short term.

1) Consolidate third-party management. This process can begin with finance and procurement. Anyone with any contract to provide services to any department in the company should be identified and cataloged in an authoritative system of record that includes information on the access privileges assigned to external users. 

Security teams should test for stale accounts and deprovision any that are no longer needed or in use (a minimal sketch of such a stale-account check appears after this list). In addition, they should assign sponsorship and joint accountability to third-party administrators.

2) Institute vetting and risk-aware onboarding processes. Both the organization and its supplier/vendor need to determine workflows for vetting and onboarding third-party users to ensure they are who they say they are — and that their onboarding process follows the principle of least privilege. Implementing a self-service portal where third-party users can request access and provide required documentation can smooth the path to productivity. Access decisions should be based on risk.  

3) Define and refine policies and controls. The organization — and its vendors and suppliers — should continuously optimize policies and controls to identify potential violations and reduce false positives. Policies and controls must be tested periodically, and security teams should also review employees’ access. Over time, auto-remediation can minimize administrative overhead further.

4) Institute compliance controls for your entire workforce. Look for a third-party access governance solution that will enable consistency across employees and third-party users, especially since regulators increasingly require this. Having access to out-of-the-box compliance reports for SOX, GDPR, HIPAA and other relevant regulations makes it easier to enforce the appropriate controls and provide necessary audit documentation.

5) Implement privileged access management (PAM). Another critical step that organizations can take to boost their cybersecurity maturity is implementing a PAM solution. This will enable the organization to enforce least privileged access and zero-standing privilege automatically across all relevant accounts. 
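As a hedged illustration of the stale-account test mentioned in step 1, the sketch below flags third-party accounts with no sign-in for a configurable window; the field names and the 90-day threshold are assumptions, not any specific product's schema.

```python
# Sketch: flag third-party accounts with no recent sign-in for deprovisioning.
# Field names and the 90-day window are assumptions, not a product's schema.
from datetime import datetime, timedelta

def stale_accounts(accounts: list[dict], max_idle_days: int = 90) -> list[str]:
    """Return IDs of third-party accounts idle longer than max_idle_days."""
    cutoff = datetime.utcnow() - timedelta(days=max_idle_days)
    return [a["user_id"] for a in accounts
            if a["is_third_party"] and a["last_sign_in"] < cutoff]

accounts = [
    {"user_id": "hvac-vendor-01", "is_third_party": True,
     "last_sign_in": datetime(2022, 1, 5)},
    {"user_id": "employee-42", "is_third_party": False,
     "last_sign_in": datetime(2022, 1, 5)},
]
print(stale_accounts(accounts))  # ['hvac-vendor-01'] -- review, then deprovision
```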

The world of work will never again look like it did in 2019. The flexibility, agility and access to first-rate talent that businesses gain from embracing modern ways of working make the changes more than worthwhile, and enterprises can realize enormous value within today’s complex and dynamic business-relationship and supplier ecosystems. But they need to ensure their cybersecurity strategies can keep up, by strengthening identity and third-party access governance.

Paul Mezzera is VP of Strategy at Saviynt.
