The Cybersecurity Crowd #13

AI (ChatGPT) and the Cybersecurity Implications for Your Business

With AI set to revolutionize how we work in the coming years, two of our Virtual Chief Information Security Officers, Darrin Maggy and Nick McCourt, and our CIO, Tony Miller, decided to weigh in on the subject. The drumbeat to adopt AI in your workplace is loud right now. Should you pull the trigger?

Our answer to that is a qualified yes—with some caveats. Whether you’re implementing major AI tools and processes or your employees are playing around with a $20/mo ChatGPT subscription, there are a lot of cybersecurity worries that come along with using the technology. Most can be mitigated with the right protections and cybersecurity reviews. Either way, we’ve got a lot of advice to give on this podcast! Give it a listen!

Check out a ChatGPT summary and the full transcript below and listen along with the embed, Spotify, Apple Podcasts, YouTube, or find us on your favorite podcast app.

 

Implementing AI Safely: Our Points

After this podcast was recorded, we took the transcript and ran it through ChatGPT, asking it to parse out all our relevant points. Here’s its interpretation of what we said… which, for the most part, is pretty spot on:

  • AI can automate tasks such as assembling reports, creating content, generating art, and providing web analysis.
  • AI can be used in conjunction with cybersecurity measures to speed up mean time to remediation.
  • Companies should be cautious about the information collected by AI tools and ensure proper control and protection of sensitive data.
  • Using AI tools like ChatGPT for business purposes requires careful consideration of data security and copyright issues.
  • Companies should establish policies and procedures for the use of AI tools and educate employees on their proper use.
  • AI tools can have unintended consequences, and companies should be aware of potential risks and plan accordingly.
  • The use of AI tools may impact existing policies and documentation within a company.
  • AI can lower the barrier to entry for malicious activities and pose challenges for cybersecurity.
  • Companies should assess the storage and backup mechanisms of AI tools and consider disaster recovery plans.
  • Careful risk assessment and cautious implementation are crucial when introducing AI to a company.
  • Companies should start by experimenting with non-sensitive data and gradually incorporate more sensitive information.
  • Coding should be approached with caution when using AI, and specific client information should be avoided.
  • Be mindful of copyright issues and the potential for AI-generated content to be based on copyrighted material.
  • Ongoing education and adaptation to the evolving AI landscape are necessary for the effective and responsible use of AI tools.

Nice job, ChatGPT! We made sure to say thank you.

Get our free AI policy template for your business. Your team is using AI tools like Copilot and ChatGPT to handle work. Make sure usage is ethical and secure with our free AI Acceptable Use Policy Template.

 

Transcript

Susan Gosselin: Hello everyone, and welcome to the Cybersecurity Crowd Podcast. I'm Susan Gosselin with Integris Marketing, and I am here today with some of our top vCISOs, our Virtual Chief Information Security Officers, as well as Tony Miller, our CIO at Integris. We have pulled out all the big guns today because we are talking about one of the hottest topics right now: AI and what it means to your business. There are a lot of things that AI can do for your company, everyone's talking about it, and all these AI tools are flooding the market right now.

Everybody is talking about how, according to the World Economic Forum, at least 28 or 29 percent of all jobs are going to be eliminated or changed by AI. There are a lot of big claims being made, but in short, what you're seeing with AI right now is basically these four things.

You're automating all kinds of drudgery work, so assembling reports, doing PowerPoints, and that kind of thing. You're writing content for your businesses: letters, summaries, thought leadership pieces. It's creating art from scratch: novels, photos, videos, you name it. It can also take written pieces and turn them into fully finished videos with moving text and summaries. And it can provide an analysis of what's on the web. So AI is basically the actual Jeeves of Ask Jeeves. For all the old heads out there who remember the birth of the internet, Ask Jeeves was like an internet butler that provided you with results.

Now AI really is like an internet butler, providing you with not only the information but an analysis of the information. And that part is new. So as you can see, there are lots of implications here for business, and lots of different tools being made available. So let's drill down, shall we?

All right. So guys, I'm gonna pop out the first question here for y'all and ask you: what, in your mind, is the implication of having AI at your business alongside your cybersecurity? Does AI automatically thwart your cybersecurity? What do you think AI is capturing in IT systems, and what is that going to mean to you? So Darrin, let's start with you on that one.

Does AI automatically thwart your cybersecurity posture?

Darrin Maggy: In terms of AI being used in the workplace or in systems, it depends on the use case more than anything else. So first and foremost, you have to take a look at the use case. If AI is being used in conjunction with a detective control, that's a very good thing, because it speeds up your mean time to remediation. If somebody in sales is using it, maybe it becomes a little more dubious. We really need to have some controls around it based on the individual use case.

Susan Gosselin: So Nick, how scared should a company be about the information that AI is collecting when your employees are interacting with it?

Should companies be scared of what information AI is collecting?

Nick McCourt: I don't think that scared is the right word. I think "careful" and "cautious" would be good words to describe it. It really is about figuring out how to safely control the use, so that you're enabling your employees to work smarter, not harder, but not give away the information you need in order to function.

Susan Gosselin: Yeah, that gets me thinking about a use case that came up just the other day. I think particularly if you are tasked with doing any kind of writing for your company, the temptation to use ChatGPT or similar tools is overwhelming. Let's just say you're the director of sales and it's time for you to give a report to the C-suite about how things are going for the company right now versus the competition.

You need to put together a PowerPoint and an executive summary, you've got half an hour to do it, you're under the gun, and you're not feeling it. It would just be so easy to take that data, load it into ChatGPT, have it spit out your PowerPoint bullet points and your executive summary, and cut and paste and have it all be done. But the problem is: where is that data going? We don't know. And until you let your employees know that is dangerous and that they can't do that, that kind of thing could be happening all over the company, and you would have no way of knowing whether your data is secured at all.

So Nick, how scared should a company be when it comes to the idea of their employees just hopping on ChatGPT and other AI-enabled tools to do their daily work? What should a company's response be to all that?

Is ChatGPT a legitimate business tool?

Nick McCourt: I don't think that they should be scared, but they should definitely be concerned and cautious, and focus on controlling it in a way that allows employees to successfully work smarter, not harder, and still not give away any important proprietary information.

Susan Gosselin: Tony, I'm going to bounce this to you, because in your position as CIO at Integris, I know you're thinking hard about what tools we can and can't use, and what safeguards we do and don't have. What have you been finding in some of your research?

Should CIOs be considering the impact of AI? Which tools can be used, and which should not?

Tony Miller: Well, Nick's right. It's a fantastic tool, and we're a technology company. You can't say "don't use technology"; that's not how we work. That's not how we think. But it has to be used in the right way. So I'm also with Nick: scared is not the right word. Intentional. I think companies need to be intentional about how they use it.

And there are solutions, right? Microsoft's coming out with Copilot, and that's gonna become available to people. That is gonna be a service you pay for, right? It's something we have access to, and our data lives in a walled garden of sorts. Just like the rest of our SaaS applications, it becomes the next SaaS application that we're able to use, and we can have some more confidence in it. I'm not gonna say all the confidence, but more than we have today; that'll help. Then there are things like ChatGPT and the other available large language models. A lot of your examples, Susan, were around content creation, whether that be pictures or text, your sales manager example. We wanna be able to use those without giving away all the stuff. And if we write our own policies, and we are intentional about how we use it, and we control where that data goes, we're good. Those other models, they're free right now. I know there's a paid-for version of ChatGPT, but you guys have all heard the concept that if you're not paying for it, you are the product. That's not something I made up, but that is what it is today. All the users of ChatGPT and other large language models today are training the model. That is what we're doing, and that's fine.

I support what the creators of these AI products are doing. But at the end of the day, if we're just giving it to 'em and they're just giving it to us, we're the product, which means we gotta be careful. It's data collection. It truly is data collection. As soon as Facebook enables AI on all the content that everybody has been pumping into it for the last 18 years, or whatever it's been, its users will have provided all the language modeling data Facebook would ever need.

It's gonna come back with some really weird cat-themed information, but that's what they've done, because they've been giving all that data to Facebook. Nobody pays for Facebook, therefore they're the product, and it's because we gave 'em all the content. So we have to think about it that way.

And as a company, as a CIO, I have to be concerned about it, as you said. But that just means being intentional.

Susan Gosselin: And certainly having written policies, it's almost like having a written bring-your-own-device policy; it becomes part of your company training. And you have to say to people: look, you can't just go out there loading company data and information into unsecured tools.

Use this one; we have this one for you. And if you're not to that point yet, then you just need to tell people: you can't be putting anything that is internal-only information into a tool like that. Right now, I'm gonna take a moment to get on my soapbox as a content generator and marketer, because that's what I do here at Integris.

And I've got something to say when it comes to the proper use of AI. I've been doing a ton of research around this, and there are some really intense issues of copyright going on when it comes to AI. So for instance, there are a lot of creators out there making novels and pictures and illustrations, and you've got people, for instance, uploading their stock photography to services like Shutterstock and Adobe Stock, right?

And now you are able to create photos from scratch using some of these stock photography tools. Canva and a lot of the others are like this: you can just type in a prompt, and it will digitally create a picture for you, which will include bits and pieces of pictures that are part of their stock photography library.

Now, a lot of these services are starting to get good and be smart about it. Every time the AI trains on a particular image or a particular piece of art, they are working out ways to make micropayments to those creators. But not all of them are doing that yet; Midjourney is a really big example of that, right?

There's been a lot of brouhaha. You've got a lot of novelists out there creating these incredibly beautiful illustrated book covers, but they're doing it with AI trained on art that doesn't belong to them. So even though it is a completely new piece of art, it had to be based on something.

So if you do not do any research on the copyright implications of what you are doing with your AI, you could be in a world of hurt. The same thing goes for taking what AI provides to you, information-wise, as gospel. When you put a prompt into AI, you need to ask it to show you its sources, so you can go back and see where that information came from.

Because it hallucinates; they talk about AI hallucinations all the time. I'll give you an example. I was doing a presentation for the rest of the marketing department on artificial intelligence and what we should and shouldn't be looking into, and I asked ChatGPT to provide me with a list of services. It gave me a list of AI services that didn't include Midjourney.

And I said, why isn't Midjourney on this list? And it said, it's a fictional company. It is not; it's real, but it's sitting there telling me it's fictional. You've gotta be hyper-aware of those kinds of things, too. It's not like doing a Google search, where you immediately see where it came from.

It is interpreting that information for you. So that is my public service announcement today. I had to get that off of my chest. What do you all think? I’m gonna pass this around. Does anyone have anything that they would like to weigh in on about copyright and exposure for the company and all of that?

Copyright Exposure

Tony Miller: Related to our company, obviously we need to be careful with content, right? As we're making stuff, our website can't be written from other people's copyrighted material. So we must be careful; we have to worry about the implications of that. You used Google as a reference, and it's different.

Having Google means I see what the sources are. But as a longtime user of Google, you also know that sometimes the answer, even in the top two results, isn't correct, assuming it's content you understand. An example: I'm not a camera guy. I don't know about lenses and focal lengths. I know these words.

I don't know what they mean. If I were to go ask ChatGPT a bunch of stuff about which lens I should buy for my camera, it would give me an answer, and I would say, oh, okay, I'll just go buy that lens. But I don't know the difference. However, if I go ask it some information about networking, servers, systems, this is my world.

Chances are I'm going to find things, as you did with that company. So at the end of the day, this means it's a tool, a reference tool, just like we used encyclopedias back in the day and now use Google or Wikipedia. It is an iterative tool that we can use at this stage. Just like I have to worry about where Google got its data, and even where the blogger I'm reading got his or her data, I also still have to be concerned about where ChatGPT got its data and whether it's even correct. If some blogger wrote about a bad camera lens, ChatGPT may have used that as its source content.

And since I don't know the difference, I'm now acting off of bad data, just like if I read a blog that said to use this camera lens and I don't know what I'm doing, I'm still going to buy the wrong lens. So it's a tool. It's a great tool. It's really neat and interesting in this phase, but it's not the answer.

The intelligence part isn't exactly what we think of when we start thinking about AI. This is a large language model; that needs to be clarified over and over again. So it has value, but the intelligence piece isn't quite there yet, and it requires the humans, us, the users of these services, to continue to use our human brains.

Cheesecake, cheesecake, CHEESECAKE!

Susan Gosselin: I've got kind of a funny story about that. My youngest son has just graduated from culinary school, and I was talking to him about ChatGPT, and he went, wait, you mean you can ask it questions and it'll tell you stuff? And I was like, yeah, just go on, there's a free version.

Give it a try. So he decided he wanted to make a lemon cheesecake for a friend, and he typed in lemon cheesecake, and then he decided he wanted to do a fancy garnish that included a glassy sugar kind of thing. Now, he knew he could probably do it off the top of his head, but he's like, let me see what this is, and I'm gonna try making this cheesecake.

So he made the cheesecake, and it was pretty good. But he was looking at the sugar garnish, and it was telling him to put in three times the amount of baking soda that you should. And he was like, if I put this in here, this thing is gonna boil over and turn into a hard-tack ball on top of the stove.

So he typed into the prompt, hey, ChatGPT, are you sure about that? And it took a second and went, oh, my bad, that's wrong; it's supposed to be this actual amount. But he's a chef, and he knew what he was seeing. If you're just a Susie Homemaker like me, trying to make a cheesecake, you could have a problem.

So that has real implications for companies at this point, for the information their employees are pulling in. If you're using it for anything serious, it's just gotta be pounded into people's heads that they have to check, recheck, and check again anything they're pulling from these things.

But enough about that. We know that AI hallucinates and does some crazy things, but what I'm wondering is how the use of AI needs to affect your policy setting and documentation as a company. Say you're a company and you're looking at buying into this new service from Microsoft, or, I know, Azure has got some new walled-garden tools they're talking about.

There are a lot of things. Amazon has got... God, what's the name of that thing? I can never remember it. But there are several different walled gardens starting to come up. So if you're a company and you're seriously looking at using these tools and rolling them out system-wide, what are some of the implications?

What are some of the things you need to be looking for before you pull the trigger on it, and how is that gonna impact your policies and documentation? Tony, you wanna go first on that?

Making the right choice regarding AI tools

Tony Miller: It simply has to be done, much like other policies: the work-from-home or bring-your-own-device-type policies. Policies must be written. We just had ours written. You have to have the policies, the procedures, the what to do and what not to do, written down, trained on, and held accountable to. That is the reality of today's world.

If anybody has compliance requirements, and even if they don't, they should have these policies written. You don't have to have compliance requirements to write a bring-your-own-device policy; you should write one anyway. And the same is true now with this, what is basically a new technology to the world. Companies have to, again, be intentional: sit down, think it through, write it down, train, and execute on it.

Susan Gosselin: What are you thinking about this, Darrin?

Darrin Maggy: I completely agree with Tony. Essentially, what I've been advising clients to do is, first of all, address the policy. By that I mean I've really been telling people to begin by saying no, just to slow it down and create some space, right? Say no, get a focus group together, and determine internally what that use case is, right?


What does that look like? Who should be authorized? Then you can move forward with explicit authorizations to use it, which can be wrapped up, again, in further policy and procedure.

Susan Gosselin: What say you, Nick?

Nick McCourt: I'm with what Tony and Darrin are saying, but I think I've probably been a little bit more aggressive, because as we're having these conversations, we're already finding that employees are using it. They get two options: we either add to a current policy, or we create an artificial intelligence use policy. And one of the first things we discuss is that we're not going to stop the employee right now from utilizing it for some of the things they're doing.

What we are going to do, though, is prohibit and completely stop any submission of proprietary, private, confidential-type information into an open-source artificial intelligence tool. The policies usually add language that says, hey, in the event that we own or lease our own, then employees are allowed to submit information into that.

And that's been a major focus: trying to push some of these organizations along a path where they are doing what Darrin's saying, saying no, but it's "no, don't just use the open-source thing; be smart about this." We are trying to develop, grow, and be stable as organizations. You can't just throw all this data into something we don't control.

Susan Gosselin: Tony, here's one of the questions I've got, because I have only been looking at it from a top-line point of view. Do we know, in these walled gardens, how that information is stored and how it's backed up into the cloud? If the company goes down, is that part of a disaster recovery effort? Do we even know at this point?

The Walled AI Garden and Data Recovery

Tony Miller: I don't have all the answers. Before we would implement something like that, obviously, it has to be part of the vetting process for any service you get, whether it's an email service or an AI service. You have to be vetting where your data is, both physically and virtually, and then deciding what your acceptable risk is there.

Yes, if at some point AI becomes a significant part of our workflow, which, if you watch the news, they'll tell you it's going to, then it does indeed have to be part of our continuity planning. If we count on it, then it has to be. Today, that's probably not the case, if we don't even have our own walled-garden AI to work with. But if that day comes, when it comes, yes, I will have to make decisions on whether that's part of our disaster recovery.

That data is most likely all gonna live in a cloud of some sort. These services are necessarily a cloud service; they're a software-as-a-service type of thing, because very few companies are going to spin up the amount of computing power it would take to do this work. It would be very cost-prohibitive.

And so it's gonna be in the cloud. People aren't gonna be spinning 'em up on their own, but that'll become a concern in maybe the near future, maybe the not-so-distant future. I don't know. This is still unfolding. This is new and it's hot, and I'm glad we're talking about it, but there are also still a lot of unknowns today.

Susan Gosselin: I think a lot of it, just getting back to what we were saying before, has to do with the nature of what it's being used for. If you are using ChatGPT to write yourself a cover letter for a resume, or a farewell letter when you're leaving a job, that kind of thing, that's fine.

For a lot of the work we do in marketing, it wouldn't matter if we were using an AI blog-to-video generator or anything like that, because everything we're producing is for the public. It's gonna be out there anyway; the final piece is the final record. The only things we have to worry about are accuracy and copyright. But when you're a salesperson putting together a big sales presentation for a key prospect, that's going to have some proprietary information. If you're one of our salespeople, or you're putting together the results of an IT assessment that we do for a customer, think about how sensitive that system information is.

So there's just a lot to consider and educate people on, right? I'm gonna pump this question out to the whole group here: what are the dangers with AI that y'all think we haven't thought of yet? And is there any way to lay the groundwork to prepare for those now? Nick, you got any thoughts on that?

Unforeseen Dangers

Nick McCourt: Loss of intellectual property. Issues with using it to build out new encryption methods that aren't necessarily approved. Using it in many malicious ways, and we're seeing this on the news already, right? These attacks are happening. And as you keep going, the sky is the limit on uses for this.

Okay, it's not perfect, it's not a hundred percent, but the sky truly is the limit. So you can use this to pretty much eliminate some software development when it comes to, hey, I want to do something malicious; you know what, I'm just gonna have this write it for me. I don't have to write this.

If you're doing penetration testing, for example, usually you have an ethical hacker attacking an organization, because they need to have that done. And remember, those people put time in, right? They're building out scripts. They're really thinking it through.

You don’t have to do that. This becomes a really nice, easy button to push, and that’s just an example. There’s so many different things to worry about with it.

Susan Gosselin: Yeah. And what do you say about that, Darrin?

Darrin Maggy: We definitely don't know as much as we need to. So many questions. The primary concern really is the unintended consequences, right? You can reach out, and you can try to project and determine what those are going to be, but obviously we have human limitations. That's why I really urge caution.

Just be very careful, because even in the best circumstances, we as humans have a tendency to miss something. So I'm just being very careful.

Susan Gosselin: Yeah. What say you, Tony?

Tony Miller: Nick's idea about the use for somebody doing penetration testing made me think of the script kiddie concept we used to have, where just some person on the internet would go download some script to mess with their friend, or maybe actually do something significantly malicious. In this world, I don't even have to go searching for that script. I don't have to find it in some dark corner of the internet. I just ask the AI for it, and I have it, and now I'm going after somebody. Again, maybe it's benign and I'm messing with my friend Nick, or maybe it's significant and I'm being malicious toward another company or organization.

The barrier to entry for attacks just became extremely low. I don't have to be an advanced programmer with advanced knowledge of vulnerabilities to take advantage of those vulnerabilities; I just ask the AI to do it for me, and now I'm executing on that. And of course, the bad guys are gonna have to be held accountable for that, but that's gonna become very difficult, and we're gonna have a lot more bad guys, because there's no barrier to entry.

Darrin Maggy: And to further my role, I guess, as the AI curmudgeon on the call: in fairness, the bad guys and girls out there have always had the benefit of time. They've always had the benefit of time to begin with. So it's just increased that; it's just made it...

Susan Gosselin: It's okay. You're allowed to speak, Darrin. We'll let you, we'll let you...

Tony Miller: Can I add to Darrin's point? Because I think it's a solid point about where AI is today. Think of the internet: we can do so many things so fast, so conveniently. This call, we're in four states, I'm guessing, and we're able to have this kind of conversation. Twenty-five years ago, even ten years ago, we'd have to be in person.

It made things easy for us. It didn't give us a thing we couldn't do; we could fly somewhere in person, sit down with four cameras, and make a really great video about any topic. But the new technology made this very easy for us. And I think that's some of your point, Darrin: it's made this easy for people.

This is just a new tool that makes your existing tasks much easier. You can look at that from the cybersecurity point of view, and that's bad for us; we don't like that. But you can also look at it from the productivity side: we're gonna love this. People are gonna be able to further their careers, have new careers, do all sorts of things, execute on a whole lot more tasks.

It's great. These are the things we're weighing, and why I think Darrin's take of caution matters, because there are two sides to this coin. Caution, intentionality: these are all important.

Susan Gosselin: So let's summarize all this now. I'm hearing a lot of really erudite points, as usual, from you gentlemen about what you need to start thinking about. What I'm hearing is that if you're going to implement it, you need a steering committee that looks at every part of it.

You need to have policies in place and to train employees on what they can and can't do with it. And then you've got to have a plan, the walled garden Tony was talking about, if you are going to be using it in any extensive way with proprietary materials. Are there any other pieces of advice you can give everybody listening today about some of these introductory things we need to do to introduce our companies to AI?

Too early to tell…

Darrin Maggy: With precious little historical data to look at, it's extremely difficult to risk-assess.

Susan Gosselin: Yeah.

Tony, Nick, you got anything on this?

Nick McCourt: I think you strategize and you proceed with caution. Everybody needs to experiment a little bit to move forward. So if you can test out the lab rats without actually getting bitten by the radioactive lab rats, that would probably be best. So, yeah.

Tony Miller: And I think companies should be testing, like Nick said. If we take Darrin's cautious approach, they need to be extremely careful about what information they're testing with. Your son's example of a cheesecake? Great. That's fun. That's exciting. It's neat to play with the new technology.

This is how we learn how to use a tool. In the future, as we're cautious, intentional, and learning, we can apply our policies to our workflows in a way that is correct. Then we can start to use real data, data that is sensitive, that might be client data, might be our own personal data.

We can then start to implement things like that. So let's keep playing with the cheesecake recipes for now, and we'll get there. Like I said, this is evolving, and we're gonna get there. It's gonna be fast; this isn't gonna be a slow process, I don't believe. But start learning with safe things.

Susan Gosselin: Yep. I would just say, particularly when it comes to coding, it sounds to me like there probably needs to be some kind of moratorium on using AI to write your code in general. Like, you've got a busy IT department that needs to run a new script for this or that.

It could be really tempting to use AI. Is that something that you guys would prohibit?

Nick McCourt: It's a specific type of camera lens. Tony, you'll have to look that up using ChatGPT. But the short version is, we want people that are in a support system, where they are trying to help out organizations, to actually provide safe, functional answers to the issues going on.

So if you're going to input or use something, do not input specific information; use general requests, and that's okay. By the way, you still get a lot of information by putting in a general request that helps you along to what you need to do. If you start putting in specific requests that have to do with said client, you're actually giving away that organization's information, and that's not what you're supposed to be doing.
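
For teams that want to turn Nick's "general requests, not specific ones" rule into something enforceable, here is a minimal sketch of what a pre-submission scrub could look like. The client name, IP address, and redact helper below are hypothetical illustrations, not Integris tooling; a real deployment would pair something like this with your data loss prevention controls.

    import re

    def redact(prompt: str, sensitive_terms: list[str]) -> str:
        """Replace client-specific terms with generic placeholders."""
        for i, term in enumerate(sensitive_terms, start=1):
            # Case-insensitive, literal-string replacement of each term.
            pattern = re.compile(re.escape(term), re.IGNORECASE)
            prompt = pattern.sub(f"[CLIENT-{i}]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = ("Write a remediation plan for Acme Bank's firewall at "
               "10.20.30.1, which failed our June IT assessment.")
        print(redact(raw, ["Acme Bank", "10.20.30.1"]))
        # Prints: Write a remediation plan for [CLIENT-1]'s firewall at
        # [CLIENT-2], which failed our June IT assessment.

The general request that survives the scrub still gets you a useful answer; the client-specific details stay inside your walls.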

Susan Gosselin: Nope. Nope. Sounds good to me. All right, gentlemen, has anybody got any other advice before I close this episode? Anybody got anything else?

Nick McCourt: I... cheesecake.

Signing off…

Susan Gosselin: I know. I'm like, go to the Cheesecake Factory. All right, don't ask ChatGPT to write your cheesecake recipes. That's my advice to y'all today, my pearls of wisdom. Okay, with that, I am going to close out this episode of The Cybersecurity Crowd. Join us next month; we're gonna be talking about all things breaking and not breaking, and things you need to know about cybersecurity.

Come and see us.
