
AI Research and Q&A



Your AI questions, answered

Looking for clear-cut answers to your AI questions? Straightforward answers empower you to make the most informed choices for your organization. We are committed to offering you authoritative insights and impartial analysis.

Graham Glass, CEO and Founder of CYPHER Learning, provides his candid responses to AI questions.

Will AI replace teachers?

Check out Graham’s latest answer to the question: “Will AI replace teachers?”

How do I protect my data if I'm sharing it with an AI?


The answer really depends on how you are sharing that data.

So for example, OpenAI is very upfront that if you're using the free version of ChatGPT and you're copying and pasting your data into it, then that data may be used to train OpenAI's models. So you don't want to share important proprietary data with the free version of ChatGPT.

That being said, they're also upfront that if you're using the enterprise version of ChatGPT or if you are sharing data via their APIs, it is not used for training, it's not stored anywhere, it's just transient and will be destroyed.

At CYPHER Learning, we interface with AIs through these APIs, and we only do that when the vendor who owns the API says the data is completely private. So if you're using a system like the CYPHER platform, which uses the APIs, you do not have to worry about your private data being shared with anyone.
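
For readers who want to see what "sharing via the API" looks like in practice, here is a minimal sketch using OpenAI's official Python client. The model name, prompt, and placeholder document are illustrative, and this is not CYPHER's actual integration.

```python
# Minimal sketch: sending private material through OpenAI's API, which OpenAI
# states is not used for model training, rather than pasting it into the free
# ChatGPT web app. Model choice and content here are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

proprietary_notes = "Q3 onboarding playbook: ..."  # your private material

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You summarize internal documents."},
        {"role": "user", "content": proprietary_notes},
    ],
)
print(response.choices[0].message.content)
```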

Should I be using AI in my business?

I think there are some areas of your business that could immediately take advantage of AI without introducing any existential risk. And there are other areas, especially ones subject to high degrees of government regulation, where you might need to be careful.

But the other thing you have to remember is that your competitors might not hesitate in certain key areas, and the longer you take to apply AI, the more of a disadvantage you're going to be at.

For example, with e-learning: if your competitors are building courses 100 times faster than you and at a tenth of the cost, is that going to put you at a strategic disadvantage?

If you are a training organization or an IT organization, the answer is almost certainly yes.

How easy is it to create courses with AI?

Our belief is that you should be able to create state-of-the-art, engaging, full-featured courses in minutes using an AI, and you should not have to know anything whatsoever about AI in order to do that.

So for example, in our system, using AI 360 with CYPHER Copilot, you just say "build me a course" and specify exactly what you want using simple checkboxes and drop-down menus, for example:

  • I want a professional tone of voice
  • I want to have an automatic voiceover
  • I want it to generate images synthetically
  • I want automated questions
  • I want group projects
  • I want rubrics
  • I want a study guide
  • I want a glossary
  • I want gamification
  • I want competency-based learning

Then just click the button and, a few minutes later, it will have done absolutely everything. And not only that: you can drag and drop in a PDF, a video, or a Microsoft document, and it will generate all of that course content inspired by the materials in that private information.
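
As a purely hypothetical sketch of what could sit behind those checkboxes and drop-down menus, the options might reduce to a simple request object like the one below. None of these names come from CYPHER's actual API; they simply mirror the list above.

```python
# Hypothetical course-generation request mirroring the checkbox options above.
# All names are illustrative; this is not CYPHER's real data model.
from dataclasses import dataclass, field

@dataclass
class CourseRequest:
    topic: str
    tone: str = "professional"      # tone-of-voice drop-down
    voiceover: bool = True          # automatic voiceover
    synthetic_images: bool = True   # generate images synthetically
    auto_questions: bool = True     # automated questions
    group_projects: bool = True
    rubrics: bool = True
    study_guide: bool = True
    glossary: bool = True
    gamification: bool = True
    competency_based: bool = True
    source_files: list[str] = field(default_factory=list)  # PDFs, videos, docs

request = CourseRequest(topic="Workplace AI literacy",
                        source_files=["employee-handbook.pdf"])
print(request)
```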

None of our competitors do this at all. 

They might be able to generate a question bank, they might be able to generate a small course outline, or they might have a little tool that lets you invoke ChatGPT from your learning platform, but you still have a huge amount of work to do before you have a decent course. We've figured out a way to capture that AI and make it super easy and super efficient. No compromise.

Can I trust the answers an AI gives me?

The first thing is that the accuracy of AI is rapidly increasing. We've all heard of the so-called hallucination effect, where an AI sometimes gets the answer wrong or just makes up an answer that sounds very convincing but is in fact incorrect. Which, by the way, is something humans also do. So let's acknowledge that this is not unique to AIs; it's a general issue with any information source. That being said, GPT-3.5, as an example, hallucinates more often than GPT-4, and OpenAI has stated publicly that it believes future iterations are going to get more and more accurate. So the first thing is that I do think the error rate of AI is going to go down dramatically over the next year or so.

The second thing is, let's face it, we don't want to be silly. Just because an AI gives you an answer doesn't mean it's perfect. So generally speaking, you're going to apply the same kind of critical thinking and cross-checking to information from an AI that you would apply whether it came from the web or from another human being. So basically, put on your critical thinking hat.

In the CYPHER Learning platform, when we're creating educational content using AI 360 with Copilot, we are very honest and upfront and say that AIs do sometimes make mistakes and you have to review the information. So we're not trying to convince anyone that everything you produce using an AI is perfect.

We are, however, doing some things to reduce the probability of a hallucination and also making it easier for human reviewers to find possible errors.

Something called AI cross-check, which will be coming out fairly soon from CYPHER Learning, will use more than one AI: one AI will precheck the accuracy of information coming out of the other. Once again, that doesn't guarantee the result will be 100% accurate, but it will tend to minimize errors.
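
To illustrate the general technique (this is not CYPHER's AI cross-check implementation), here is a sketch in which one model drafts content and a second model is prompted to flag claims a human reviewer should double-check. The model names and prompt wording are assumptions.

```python
# Sketch of cross-checking: model A drafts, model B critiques the draft.
# Model names and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def precheck(draft: str) -> str:
    critique = ("Review the following text for factual errors or unsupported "
                "claims, and list anything a human should double-check:\n\n")
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # a second model acts as the checker
        messages=[{"role": "user", "content": critique + draft}])
    return r.choices[0].message.content

draft = generate("Write a short lesson on photosynthesis.")
print(precheck(draft))  # surfaces likely errors for a human reviewer
```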

So I think generally speaking AIs are just like any other information source. You've got to be careful, you've got to check them, but they are going to get more and more accurate quite quickly.

Does AI-generated content violate copyright?

99% of the time, no, but 1% of the time, maybe.

A little bit of a review on how AIs actually work: AIs do not copy and paste all the content on the internet into their digital brain and then regurgitate it. What they do is review huge amounts of content, extract the essence of that content, and then codify it in terms of numbers.

So the AI almost certainly could never actually come up with exactly what it's read in the first place because that's not how it stores information. And much like a human being, it generates fresh information from everything that it's seen. And anyone who has used something like ChatGPT knows that every time you ask it a question, it gives you a slightly different answer. And that is one example of how it is not copying and pasting.
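
That sampling behavior is easy to demonstrate in miniature. The toy below assigns made-up scores to candidate next words and samples from the resulting distribution; run it a few times and the output varies, much as ChatGPT's answers do. Everything here is invented for illustration.

```python
# Toy illustration: a model stores knowledge as numbers and *samples* each
# next word from a probability distribution instead of copying stored text.
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up model scores for the word following "The sky is"
next_words = ["blue", "clear", "overcast", "falling"]
probs = softmax([3.2, 2.1, 1.4, -1.0])

for _ in range(3):  # ask the same "question" three times
    print("The sky is", random.choices(next_words, weights=probs)[0])
```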

One of the things that OpenAI and Microsoft have done is announce something called "Copyright Shield", which underscores their confidence in and commitment to protecting the output of AI against copyright claims. So if you're using one of their products and someone comes after you and says you violated a copyright, then Microsoft or OpenAI will actually come to your legal defense, which is a really great move. So generally, you can use the output in confidence, knowing that it's not copied and pasted from any particular source. And that covers 99% of the cases.

The 1% of cases that I think is worth acknowledging is that there are some content creators, especially artists, who have spent years crafting a very, very specific, very unique identity. Picasso is an obvious example from the past, but there are people doing this today. And if an image-generation AI learned all about Picasso and then started generating 100,000 Picassos using his specific, crafted identity, then that almost certainly would be a case for a copyright violation.

So there are very, very few specific cases where you do need to be careful, but in at least 99% of cases, you don't have to worry at all.

Which AIs does CYPHER Learning use?

We currently use 5 different AIs and over the next month or so, that's probably going to grow to 7 or 8 AIs.

The general idea is that we pick AIs based on what they're best at, how fast they are, and how much they cost. So we use one AI, for example, for image synthesis, another AI for voiceover, another AI for video transcription, another AI for content generation.

But the nice thing about our system is that none of this complexity surfaces to the user. You just click a checkbox: "I want to synthesize images." And hiding this from the user gives us the flexibility to replace these AIs at any point, when something better becomes available.
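
The pattern being described is essentially a routing layer: each task type maps to whichever model currently does it best, and the mapping can change without users noticing. A hypothetical sketch, with stub functions standing in for vendor API calls (none of this is CYPHER's actual architecture):

```python
# Hypothetical routing layer; task names and stubs are illustrative only.
from typing import Callable

def synthesize_image(prompt: str) -> str:
    return f"[image for: {prompt}]"       # stand-in for an image-model call

def generate_voiceover(text: str) -> str:
    return f"[audio of: {text}]"          # stand-in for a text-to-speech call

def transcribe_video(url: str) -> str:
    return f"[transcript of: {url}]"      # stand-in for a transcription call

def generate_content(prompt: str) -> str:
    return f"[content for: {prompt}]"     # stand-in for a text-model call

AI_ROUTER: dict[str, Callable[[str], str]] = {
    "image_synthesis": synthesize_image,
    "voiceover": generate_voiceover,
    "video_transcription": transcribe_video,
    "content_generation": generate_content,
}

def run_task(task: str, payload: str) -> str:
    # Swapping a vendor is a one-line change here; users never see it.
    return AI_ROUTER[task](payload)

print(run_task("image_synthesis", "a diagram of photosynthesis"))
```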

So obviously, we commit to our end customers that all their private data is completely private. So we're not going to share it with anyone. We strive very hard to make our systems work very fast and reliably. But aside from that, there's really no need for anyone to know what AIs we're using behind the scenes.

What is an AI prompt?

An AI prompt is basically just something that you say to an AI and depending on how you ask the question, you might get a different kind of response. Within our engineering team, we've got a lot of expertise now in how to ask the question in exactly the right way to get the best response back.

You don't have to know anything about AI prompts when you're using the CYPHER platform because we package the AIs, we hide all the nasty details from you. All you need to do is to check a few boxes, select a few menu options, and say Go.

Another question is: why can't I just do exactly what you're doing using AI prompts? The answer is that you actually could, as long as you don't mind working about 100 times slower than the CYPHER Learning platform. To give you an idea, when you build a course on our platform, we trigger a minimum of 200 prompts, many of which run in parallel at superhuman speed. We then take the results of all those prompts and, through tight integration, generate the modules, the pages, the assessments, the quizzes, and the glossaries.
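
The fan-out just described can be sketched with ordinary async code. Here `fake_llm` stands in for a real model API, and the component count simply mirrors the 200-prompt figure above.

```python
# Sketch of fanning out many prompts concurrently and collecting the results.
import asyncio

async def fake_llm(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stands in for network and model latency
    return f"result for: {prompt}"

async def build_course(topic: str) -> list[str]:
    prompts = [f"{topic}: component {i}" for i in range(200)]
    # All 200 prompts run concurrently rather than one after another.
    return list(await asyncio.gather(*(fake_llm(p) for p in prompts)))

results = asyncio.run(build_course("Intro to data privacy"))
print(len(results), "components generated")
```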

If you were trying to do this with ChatGPT, what we do in 10 minutes would probably take you around 10 to 20 hours and every single time you want to repeat it, you'd have to spend another 10 to 20 hours doing that.

So, yes, you could do it. But I don't think it's a good use of your time.

What do you think of the US Bill of Rights for AI?

Overall I think it makes complete sense. I don't think there's anything weird in there. To give you an idea of the kind of things that they're thinking about, one of them is privacy. So if you're going to use an AI, you want to be assured that it's not going to take all of your data and share it with other people. So that makes good sense.

Another thing is related to national security. The government is going to work with key vendors such as OpenAI and Microsoft to try and make it as difficult as possible to use an advanced AI for nefarious purposes - such as building a biological agent or blowing up a nuclear reactor.

Another thing which is very important: there's no question that AIs are going to be used more and more to algorithmically calculate various outcomes that can affect your livelihood. For example, you might be applying for credit or for housing, and we want to make sure that the AIs do not, either directly or indirectly, discriminate against any particular group, thereby putting them at a disadvantage.

And the last one is related to jobs. It's quite obvious that AI is going to displace a whole bunch of jobs over the next few decades, but ideally people in those jobs can transition to other jobs through a process of upskilling. One of the things the US government wants to do is make sure that this does not happen too abruptly, so that people have plenty of time to make the transition.

So overall, I view the US Bill of Rights for AI as being very sensible and I don't see any particular issues with it.

How will AI change learning platforms?

If you look at how people typically use learning platforms, it's to log in and then take a traditional course which is typically broken up into modules and sections. If you're lucky, it's gamified and if you're super lucky, it will also track your competencies. And that is an obvious area that AI can assist.

So specifically, AI can allow an instructor, professor, or teacher to automatically create amazingly sophisticated courses in as little as 10 minutes. It's not going to do 100% of the job, but it will do at least 80% of it. But that's a fairly straightforward answer.

I think there's a much more interesting answer if you look beyond the way people have traditionally learned using platforms. For example, a lot of people have something they want to learn right now, something very important and mission critical, and they don't have the time or the inclination to rummage around your jukebox of courses, find the right one, and take it.

So one of the cool things is that you can use generative AI to learn something right there, on demand, in an easy-to-consume format, without having to take a course. This is something that I think of as instant learning, or just-in-time learning if you like.

I think the more profound thing that's going to occur is that these platforms are pretty soon going to have personal agents. A personal agent is a thing running 24/7 in the background, trying to find ways to help you accomplish your learning objectives. It might recommend content to you, automatically notice useful news articles and summarize them for you, or automatically follow up a week after you've consumed some material and give you a little refresher. This is something that I think is going to revolutionize the way learning platforms work. So stay tuned, and let's see if this all comes true.
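
As a toy sketch only (no real platform API is implied), the follow-up part of such an agent could be as simple as a periodic check:

```python
# Toy "personal agent" tick: find lessons finished at least a week ago
# and queue a refresher. Data and behavior are purely illustrative.
import datetime

completed = {"Spreadsheet basics": datetime.date(2024, 1, 2)}
REFRESH_AFTER = datetime.timedelta(days=7)

def agent_tick(today: datetime.date) -> list[str]:
    actions = []
    for lesson, finished_on in completed.items():
        if today - finished_on >= REFRESH_AFTER:
            actions.append(f"Send refresher quiz for '{lesson}'")
    return actions

print(agent_tick(datetime.date(2024, 1, 10)))
```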

Will AI replace teachers?

The short answer is no. Why do I feel this way? I used to be a teacher: I taught Computer Science at the University of Texas at Dallas. People really liked my classes, and one of the reasons was the energy, enthusiasm, and anecdotes that I brought to the classroom, which got everyone really excited about learning and put them in the right frame of mind. Obviously I delivered the core materials, but there was so much more that I did. I ran lots of really cool hands-on projects: let's build a neural network simulator. So I always got great reviews, people really enjoyed my classes, and they still send me random emails today.

So would an AI have done that? I don't think so. However, if I had gone into a lecture and all I said was "OK everybody, turn to page 15. Now let's all read together," then yes, I could probably have been replaced by an AI, if I was that bad and that boring.

But I do like to think that a majority of educators actually do a lot more. And the most important thing that you can do is to inspire and motivate your students.

Ask Graham!

Got a question about AI? Ask it here and we'll see what Graham says!