Safety Services New Brunswick

Artificial Intelligence & Occupational Health & Safety - Dr. Matthew Hallowell, PhD

Dr. Matthew Hallowell, PhD, Founder & ED of the Construction Safety Research Alliance (Season 2, Episode 24)

Send us an e-mail to podcast@ssnb.ca

“Tune in to this fascinating discussion with Dr. Hallowell about the benefits of artificial intelligence and how it is being applied to occupational health & safety. It’s a game changer...”

Dr. Hallowell is founder and executive director of the Construction Safety Research Alliance, an endowed professor of construction engineering at the University of Colorado at Boulder, and executive director of Safety Function, a firm that helps businesses translate research into practice. His research, teaching, and consulting focus on the prevention of serious injuries and fatalities in the workplace.

Perley Brewer (Guest)   0:30
Welcome to today's podcast. My name is Perley Brewer and I will be your host. Today's podcast guest is Dr. Matthew Hallowell. Dr. Hallowell is a President's Teaching Scholar and endowed professor of construction engineering at the University of Colorado at Boulder. He earned his BS and MS in civil engineering and a PhD with a focus on construction engineering and occupational safety and health. Before his academic career, he worked in construction as a labourer, project engineer, and quality inspector.
Hallowell specialises in construction safety research with an emphasis on the science of safety. He has published over 100 peer-reviewed journal articles on energy-based hazard recognition, safety leading indicators, safety risk assessment, predictive analytics, and precursor analysis. For his research, he has received the National Science Foundation CAREER Award, the inaugural Thomas F. Farrell II Safety Leadership and Innovation Award, and the Construction Industry Institute Outstanding Researcher Award.
Quite the achievements, Dr. Hallowell. Thank you very much for being with us today.


 Matthew Ryan Hallowell   
1:40
Yeah. Thanks for having me, Perley.
Perley Brewer (Guest)   
1:42
The reason I went through your background is to give folks a sense of who you are, because it's certainly very impressive.
You've written a number of peer-reviewed journal articles over the last few years, which I know is a big deal in the university environment. Maybe, just to set the tone before we get into artificial intelligence, do you have two or three of those you could talk about, what you researched?
Matthew Ryan Hallowell   
2:09
Yeah, sure. The research is all focused on occupational safety; that's my area. But one of the cool things about safety research is that it's a multifaceted problem, so there's a lot to learn, in terms of numerical things like risk analysis and psychological things like how people recognise hazards, evaluate risk, and make decisions. So my research portfolio is quite broad, and I get to work with everybody, from folks in the business school to folks in psychology and neuroscience.
My specialty in there is applying those scientific concepts to the problem of worker safety. I think we have big opportunities to make scientific advancements in the field, and safety is a very interesting area to research. Because of its multifaceted nature, it keeps me on my toes and I get to study a lot of things. Probably the research I'm best known for in the professional arena is on hazard recognition and the use of the concept of energy to help people see more hazards, identify which hazards have life-threatening potential, and control that energy to save lives. So, working on things like the energy wheel to help people recognise types of hazards that they would normally overlook because of their natural biological limitations. That's one example of some of the work I've done.
Another, more recent piece of work extends that theory, using the concept of energy to help us identify which hazards have that life-threatening potential. For example, we found that the human body can withstand about 1,500 joules of energy, to get precise with you. Over that limit, a life-ending, life-threatening, or life-altering outcome becomes most likely: no longer just possible, but most likely. So I try to take some of these old problems that we've had in the industry for a long time, put some newer research to them, and bring knowledge we have from physics, medicine, psychology, business, and so forth into our domain.
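To make the 1,500-joule idea concrete, here is a minimal sketch of an energy-based severity check, assuming a simple gravitational example; the threshold figure comes from the interview, while the function names, mass, and height are illustrative only.

G = 9.81  # gravitational acceleration, m/s^2
SEVERITY_THRESHOLD_J = 1500  # approximate energy the human body can withstand

def gravitational_energy_joules(mass_kg: float, height_m: float) -> float:
    """Potential energy released in a fall: E = m * g * h."""
    return mass_kg * G * height_m

def likely_life_altering(energy_j: float) -> bool:
    """Above roughly 1,500 J, a life-altering outcome becomes most likely."""
    return energy_j > SEVERITY_THRESHOLD_J

# Example: a 90 kg worker falling 2 m carries about 1,766 J, over the threshold.
energy = gravitational_energy_joules(90, 2)
print(f"{energy:.0f} J -> likely life-altering: {likely_life_altering(energy)}")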
Perley Brewer (Guest)   
4:34
So let's dive into our main topic, then: artificial intelligence. Why did you start researching this topic?
Matthew Ryan Hallowell   
4:41
Yeah. So most people that I know in the industry don't actually know that I study AI. It's in vogue right now, so everybody and their brother seems to be studying AI or developing little AI widgets.
But my work in AI actually started at the very beginning of my time as a professor here at the University of Colorado, 16 years ago. My PhD work was in risk assessment, so I was getting into the quantitative side of safety. As a new academic, upping the rigour of my research and writing proposals to the National Science Foundation, for example, that's where I started to delve into more sophisticated methods, in collaboration with some of my now close colleagues and friends who did much more advanced data analysis than we typically see in the safety domain. For example, a colleague of mine, Balaji Rajagopalan, is a climate researcher in our water resources group. He and I collaborated to start bringing some of those concepts of more sophisticated analysis into safety as it applies to making predictions and assessing risk. So I really started getting into the domain of AI for safety back just before 2010; that's where I was doing my initial work.
It built from there. I had a couple of fantastic PhD students, one of whom, Antoine Tixier, I still work with; he and I are working on AI together now from a business standpoint. So the trajectory has always been bubbling under the surface; it's been a quiet part of my research. Academic researchers were very interested in AI, while the industry, I think, was still trying to make sense of what AI was for the last 15 years. And then all of a sudden, in the last couple of years, post-COVID, with ChatGPT and other forms of generative AI becoming popular in everyday use, the interest in AI has exploded. So the research that we've been doing for a long time is now in the spotlight, even though that's not necessarily the kind of research we were traditionally known for in the professional domain.
Perley Brewer (Guest)   
7:13
So what exactly is AI? I know we hear the term used so much, but what is it from your perspective?
Matthew Ryan Hallowell   
7:23
Yeah. So AI is a really broad term; sometimes we use it to describe all sorts of different things. A lot of folks think of AI as, you know, the Terminator and some of these other things that we'd seen for a long time.
The best definition I can give you is that AI is the simulation or replication of human intelligence in a computer. We use our senses, our sight for example, to make sense of visual patterns in our environment; that's how we navigate our world, right? We might listen for things, take the vibrations in the air, convert them into text in our brain, understand it, and communicate back through other vibrations. Or we make predictions using the information in our environment: take it all in, predict what might happen if we take a particular action, and make a decision.
The cool thing is that AI has these different facets, so there isn't just one type of AI. It could range from what we call computer vision, which is essentially teaching computers to use a camera, see their environment, and make sense of the things around them. There are good common examples of that: self-driving cars use visual input, make sense of it, and use it to make decisions. If you drive through the tollway and use the licence plate toll option, a computer is reading your licence plate just as a set of human eyes would have had to do in the past. Those are a couple of basic driving examples of computer vision; people are putting it into robots and other things (and when I say we, I'm not doing that, but people are) so that they can navigate their world. But AI is more than just that. It could be the voice-to-text you might have on your phone: that's AI taking the speech vibrations and converting them into text. And then you have natural language processing, which takes that text and makes sense of the patterns, essentially reading it. You can hear it, but do you read and understand it? So, my long way of answering a good, simple question: I think of AI as the technological tools we have, run on computers, that help replicate human intelligence. Sometimes they do a better job than we do, and sometimes worse.
Perley Brewer (Guest)   
10:08
So you've been at this for a while. What have you come up with in regards to the use of artificial intelligence as it relates to occupational health and safety?
Matthew Ryan Hallowell   
10:19
Oh, it's a good question. A lot. I'll tell you where I started, because I think that might be the best place to start anyhow.
Back in 2010, I was looking at using different types of information to make predictions about what might happen on job sites as it relates to safety. For example, given a work situation, what's the most likely severity of the injury, what type of body part would be injured, what energy source might be involved in that injury, and so on. I wanted to take information and make predictions that would tell us, for given work scenarios, which one is the riskiest, why, and what might happen. In the past we did that based on tasks. We'd take a particular task, like working on a roof installing solar panels, and just ask: what's the risk of installing solar panels? But what we realised, and this makes intuitive sense, is that it's not just about the task name; it's really about the context people are in. Is it snowing, or is it a nice sunny day? Are you working on a sloped roof or a flat roof? Is the system energised or de-energised? All of those things drive how risky a job operation is, not just the name of the task. So we asked: can we take a different approach? Can we use a genome-style approach and break work down into its fundamental pieces, like the working surface, the weather, the types of materials, the hand tools we're using, and so forth, instead of labelling by task? What we found is that we could, and we could use injury reports and observations to gather that data, because they usually have that information written in text: 'was using a ladder,' for instance. But here's the problem: it takes a really long time to read all of that, and it's a horribly boring task; nobody wants to do it.
We needed a more sophisticated way to pore over the hundreds of thousands of injury reports and observations we had, and that's where we used natural language processing. Instead of having a person read a million reports, literally a million reports, we would read some and write a set of linguistic rules that say when those features of the work are present. Ladder is a good example because it's obvious: we use the term 'ladder.' But other things are a little more subtle, where you have to infer from the text what's going on. So you essentially train the computer by writing all of these rules that help the computer read for us. Instead of us identifying all these attributes, the computer would do it, we would check it, and then we would tune the model and make it better until it was outperforming even us. That was our first foray. So now, for example, I don't need to read an injury report or an observation; I can use the natural language processing tool to read all of it. You could imagine from a business perspective that's good, because companies have tens or hundreds of thousands of these and nobody has the time to read them all and make sense of them. But a computer does, and it can do it in seconds. So that's a good example of AI and where we started: training the computer to do a human task of reading that we don't have time to do, don't want to do, and really aren't good at doing.
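As a rough illustration of the rule-based reading described here, the sketch below tags an injury-report narrative with work attributes using keyword patterns; the attribute names and rules are invented for illustration and are far simpler than the published models.

import re

# Hypothetical linguistic rules: each work attribute maps to text patterns
# that signal its presence in an injury-report narrative.
RULES = {
    "ladder": re.compile(r"\bladders?\b", re.IGNORECASE),
    "working_at_height": re.compile(r"\b(roof|scaffold|elevated|fell from)\b", re.IGNORECASE),
    "energised_system": re.compile(r"\b(live wire|energi[sz]ed|shock)\b", re.IGNORECASE),
    "poor_weather": re.compile(r"\b(snow|ice|icy|rain|wind)\b", re.IGNORECASE),
}

def extract_attributes(report_text: str) -> list[str]:
    """Return every work attribute whose rule fires on the narrative."""
    return [attr for attr, pattern in RULES.items() if pattern.search(report_text)]

# A report that would take a person minutes to code is tagged in milliseconds.
report = "Worker slipped on an icy ladder while accessing the roof."
print(extract_attributes(report))  # ['ladder', 'working_at_height', 'poor_weather']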
And so from there we had a quantitative data set, and we could make predictions using machine learning and some other techniques. More recently, we've been getting into computer vision: training a computer to take a photograph or a video and automatically recognise all the energy sources, the hazards, if you will, in the work environment, and assess the extent to which those energy sources are controlled or uncontrolled. So you could imagine the future, as these models mature: an eye in the sky, if you will, or cameras on equipment, or what have you, that alert people when there's a change in a hazard or there's a missing control. It gives us information, because we don't have eyes in the back of our heads; we can't do our work and look around us at all times.
So that's kind of where we came from and where we're going, but AI is something that I think is going to be a collaborator for humans in the near future.
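A minimal sketch of the downstream logic such a computer-vision system might apply, assuming a hypothetical detector has already returned labelled detections; the labels, the energy mapping, and the 'controlled' flag are illustrative assumptions, not the research group's actual model.

# Maps hypothetical detector labels to the energy source they represent.
ENERGY_MAP = {
    "suspended_load": "gravity",
    "moving_excavator": "motion",
    "overhead_power_line": "electrical",
    "pressurised_hose": "pressure",
}

def uncontrolled_energy_alerts(detections: list[dict]) -> list[str]:
    """Flag detections that map to an energy source and lack a control.

    Each detection looks like {"label": "overhead_power_line", "controlled": False},
    standing in for the output of an object detector plus a controls classifier.
    """
    alerts = []
    for det in detections:
        energy = ENERGY_MAP.get(det["label"])
        if energy and not det.get("controlled", False):
            alerts.append(f"Uncontrolled {energy} energy: {det['label']}")
    return alerts

frame = [{"label": "overhead_power_line", "controlled": False},
         {"label": "suspended_load", "controlled": True}]
print(uncontrolled_energy_alerts(frame))  # ['Uncontrolled electrical energy: overhead_power_line']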
Perley Brewer (Guest)   
14:56
 Now, have you had the opportunity to test this out in the actual workplace?
Matthew Ryan Hallowell   
15:02
Oh yeah. All of the work that we do on the AI side now is done in collaboration with industry. So, Perley, you know, like any other academic, I'm boiling up ideas in my office at the university, but we have an AI Council, a Safety AI Council: a group of companies that work with us to ask good questions and drive what direction we should be going in terms of what sorts of AI tools are needed.
Perley Brewer (Guest)   
15:22
 Good.
Matthew Ryan Hallowell   
15:32
And then they go and try it out, pilot it; we see what works and what doesn't, and we keep making the models better. So everything we're doing now is collaborative in nature. I'll give you an example, since I mentioned natural language processing in detail. Another application we use it in is making sense of a pre-job safety brief, or toolbox talk, or tailgate talk, whatever you call the meeting that happens before the work begins.
Right now the old way, the still-existing way, of doing that is to have the meeting, and then somebody has to fill out all that paperwork, and who wants to do that, right? The real purpose is to have a good meeting; the paperwork is a byproduct, not the purpose of the meeting. So we use voice-to-text to listen to the meeting, and then we use natural language processing to take the meeting and automatically complete the paperwork. We can save big companies 100,000 hours a year, easy.
That's just another example, but it's all happening in industry, because otherwise we produce stuff and we don't know if it works. The first half of developing AI is creating it; the second half is testing it, and sometimes people don't do the second half all that well. They just create stuff and say 'buy it.' But that's not the best.
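A toy sketch of the auto-completed paperwork idea, assuming the voice-to-text step has already produced a transcript; the form fields and keywords are hypothetical stand-ins for whatever the real brief requires.

# Hypothetical pre-job brief form: each field is checked off if any of its
# keywords were spoken during the meeting (transcript from a voice-to-text step).
FORM_RULES = {
    "fall_protection_discussed": ("harness", "tie off", "guardrail"),
    "electrical_hazards_discussed": ("lockout", "tagout", "de-energised"),
    "ppe_discussed": ("gloves", "hard hat", "safety glasses"),
}

def fill_form(transcript: str) -> dict[str, bool]:
    """Auto-complete the paperwork from the meeting transcript."""
    text = transcript.lower()
    return {field: any(kw in text for kw in keywords)
            for field, keywords in FORM_RULES.items()}

transcript = "Today we tie off above six feet and verify lockout before starting."
print(fill_form(transcript))
# {'fall_protection_discussed': True, 'electrical_hazards_discussed': True,
#  'ppe_discussed': False}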
Perley Brewer (Guest)   
16:55
So what kinds of companies are in this group you just talked about? Not individual names, just the kinds of companies, the industries.
Matthew Ryan Hallowell   
17:02
Yeah, for sure, I'm happy to describe that. We have an interesting combination of clients and contractors. Sometimes you'll have organisations that are client-only or contractor-only; this is a combination, a collaboration, of clients and contractors. So we have representation from oil and gas, especially midstream oil and gas pipeline types of projects; commercial building; heavy civil construction; and electric and gas utilities. We hit some of the big players in construction; that's our main focus, construction as it's broadly defined, but from both the client and contractor sides, and that's nice because you get both perspectives boiled into one model.
Perley Brewer (Guest)   
17:58
 Using this AI, what have they discovered that they find helps them the most?
Matthew Ryan Hallowell   
18:07
I think what's really important, now that we're in this brave new world of using AI, is that we're at an interesting time where the AI tools are working really well, far better than they have in the past, especially with this new generative AI like ChatGPT; all forms of AI have accelerated.
The challenge is that most businesses don't know how to use it.
So you get this shiny widget, it does really cool things, and then it's all about how you use it, right? I'll give you an example.
If you and I were to go out onto a job site, we might look at a particular task and say: all right, our job from a safety perspective is to look at the task, identify the hazards, and assess whether or not those hazards are controlled. Pretty basic, and probably the most important thing we'd do on a safety walk.
Perley Brewer (Guest)   
19:07
 Yep.
Matthew Ryan Hallowell   
19:10
If I were to use AI to do that scan for us, it would find some stuff, and it would make some mistakes. It would find some things that aren't there; those are false alarms. It would miss some things that are there, just like a person would. A person's not perfect; neither is the AI. And what would happen if we did it that way is that it would narrow our focus down to what the tool identified, and that would limit what we were looking at, even as we try to infuse our intelligence in there too.
If we were to switch it, though, instead of using the AI first, we use the AI second. Instead of pulling out the phone or the tablet, you and I identify all the hazards that we can find. That's opening the aperture, if you will, to everything we can identify. Now, that's not going to be everything; we're going to miss stuff. We know that; we've learned it from lots of field testing.
Then, after we've identified everything we can, we pull out the phone or tablet, take a picture or use a video, and see what the AI identifies, to see if it found something we missed.
Then the sum is better. Now we've expanded and opened up instead of shrinking down. So you always want to take the human intelligence first, then the artificial intelligence, not the other way around. We find that work teams are much more effective when they use AI as a collaborator to check their work, and less effective than they were before if they try to use AI to do their work for them.
So AI, in this particular example, is a great way of checking and augmenting human performance.
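A small sketch of that 'human first, AI second' sequence, where the AI scan only ever widens the human's list; the hazard names and the pretend AI output are illustrative.

def combined_hazard_walk(human_found: set[str], ai_found: set[str]) -> set[str]:
    """Union the human's hazards with the AI's so the scan augments, never narrows."""
    missed_by_human = ai_found - human_found
    if missed_by_human:
        print(f"AI flagged items to verify: {sorted(missed_by_human)}")
    return human_found | ai_found

# Humans look first and open the aperture; the AI photo scan comes second.
human = {"open edge", "suspended load", "trip hazard"}
ai = {"suspended load", "missing guardrail"}  # pretend output of an AI photo scan
print(sorted(combined_hazard_walk(human, ai)))
# AI flagged items to verify: ['missing guardrail']
# ['missing guardrail', 'open edge', 'suspended load', 'trip hazard']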
Perley Brewer (Guest)   
20:49
 OK.
Matthew Ryan Hallowell   
20:58
Not replacing it. I think the biggest thing we've learned is that you can create these great tools, but they'll never be perfect, and they should always be used as our collaborator, not our replacement, unless it's a task the AI is extremely good at and that doesn't really require that human component, like reading a million injury reports. That's a more rote task. So our biggest lesson learned is about how the tools are used.
Perley Brewer (Guest)   
21:31
Now, you mentioned something a moment ago, and that is, you know, a lot of these AI groups now, whether it's ChatGPT or whatever, they say 'use it.' Where does a person go? What would you recommend to a person that wants to learn how to use AI and some of these applications?
Matthew Ryan Hallowell   
21:56
 Yeah, there are.
Perley Brewer (Guest)   
21:57
Are our education institutions, for example, getting into this?
Matthew Ryan Hallowell   
22:01
There are. I mean, this is more of a computer science topic than a safety topic, so it is very difficult to know where to start.
Let's take ChatGPT as an example. OpenAI created this model, and it far outperforms anything that anybody, in anybody's wildest dreams, thought a large language model would do. And pretty soon after, people were looking at how to use it.
At this stage, I don't know that the place to start would be a university course or anything like that. Frankly, I don't know enough about what's available out there, even though I am a professor; it's a different discipline than my own. I'm an engineer, construction engineering, not computer science. But I've found that there are some really good podcasts out there, and coincidentally we're on a podcast, where people describe the use cases: what's worked for them, what doesn't work for them. I think there's one called the ChatGPT Experiment; that's one that resonated with me. And I think that's an interesting way to start.
The other thing that's kind of wild about ChatGPT is that you can ask it. Sometimes people say: I've got the prompt window, I'm chatting with it, where do I start? Start by asking it what it can do. Tell it what you want it to do and ask it how to use it. Normally we think of AI as something where you've got to be an expert user, but you can learn to use it from the AI itself. So just say: tell me what you can do for me.
Perley Brewer (Guest)   
23:42
Now, for someone that wants to find out more about the kind of research you're doing, to get involved in it, or to find out what might be out there that could be helpful to their organisation, what would you recommend?
Matthew Ryan Hallowell   
24:02
So I lead two major research groups. One is the Safety AI Council. That's a really good place to start, because if you really are on this trajectory of looking at AI, you want to see what responsible use of AI looks like, where it's going, and how other people are using AI in successful ways in business for safety. This Council is a really good opportunity, because that's really what it's all about: sharing ideas, providing direction, saying 'wouldn't it be cool if we could do this or that,' and then the development team on my side goes and creates proof-of-concept tools that you can try. All of that is a really good way to learn, contribute, and start getting involved. That's how we do our AI research now, and like I said, it's all in collaboration with industry.
On the university side of things, I run a very large research institute called the Construction Safety Research Alliance, the CSRA. The CSRA is a collaboration of about 110 different companies, and we do research on all sorts of things, from how we identify and manage last-minute change during work to how we declutter safety and get rid of some things. Those teams work on other types of research, and it's a really good organisation to follow, because all of the research that group does is made freely available, to members and non-members alike; everything that's produced is available for free. So the CSRA, if you look it up, is a treasure trove of knowledge.
It kind of depends on what you're interested in. My life is split in two: the research alliance is at the university; the AI Council is something outside the university. Both are R&D-focused, where the university work is more fundamental research and the AI work is more applied development. So those are the two facets, the two ways you can follow. The CSRA's Knowledge Centre is a good place to start, and you can attend the summit virtually for free. There are all sorts of opportunities in both of those areas.
Perley Brewer (Guest)   
26:17
So what's next for you in the area of AI? What are you looking at, say, in the next year to even three years down the road?
Matthew Ryan Hallowell   
26:21
 Hmm.
Well, my colleague Antoine focuses on the development of the new technology. Where most of our effort is going is into this new tool called ChatSafetyAI, which packages together the computer vision, the predictive models we've worked on for a decade, and the indexing of all sorts of regulations and other things into one tool. So he's focusing on how to make all of those tools more robust and easier for people to use, and my contribution now is largely focused on how to use AI.
I'm not the programmer; I provide more direction on what we should be studying and advancing. My work now is really on training and workshops, and on working with industry to determine and test the best way to use these new tools to create the outcomes we're looking for, and where the potential traps are: where we could be using AI in ways that are counterproductive, or potentially even unethical, without even knowing it. So I focus on use; my colleague Antoine focuses on development.
Perley Brewer (Guest)   
27:41
So, your thoughts on AI to date, and where do you see it going, as far as potential and as far as liabilities?
Matthew Ryan Hallowell   
27:53
AI has made scary levels of progress recently.
I've been using ChatGPT as an example because I think it's the most prominent example nowadays. Two years ago, nobody had heard of it; nobody knew what a large language model was, even though people had been working on these things for a very long time.
The thing that's kind of amazing and a little scary is that ChatGPT, and what's now GPT-4o, was created simply by exposing those models to insane amounts of text. And all of a sudden, it was able to do things it was never even programmed to do. For example, a large language model is all about language; it's all about words. But ChatGPT can draw you a picture. It wasn't trained on pictures; it was trained on text. The fact that it can create an image, and a good one at that, means we're starting to get into borderline sentience, meaning it's thinking for itself, able to do some things it was never told to do. I don't say this to scare people necessarily, but I think it's worth paying attention to: the folks who made the large language model behind ChatGPT don't know why it works as well as it does. Scholars in computer science, we don't know how and why it matured the way it did. We just know that we fed it text, and then it was able to do stuff.
So we're starting to lose a bit of that control, in terms of AI doing exactly what we tell it to and nothing more, and that can be a good thing and a bad thing. It can teach itself; it can get better. And if you ask some of the leaders in AI now, there's consensus that AI is going to be smarter than we are before long. It's not going to be able to reproduce all forms of human creativity and other things, but if you interact with ChatGPT, it can produce some pretty creative stuff.
So it's going to be able to do some things that we can do, better than we can do them. People often say, look, the computer's only as good as the people that programme it, right? That's not the case anymore. The computer can become smarter than the human who programmed it, and I think that's where we've hit a tipping point.
So with AI, it's time to learn what it is and how to use it, and the best thing I can convey to people is this: AI can be a really good collaborator, and people who know how to use it are going to far outpace people who don't. You've got to become literate. It's like an engineer with a calculator, right? Forty years ago, all the engineers were complaining about calculators and how they were going to ruin everything. Now we use calculators and computers to do all forms of engineering; we still need the human intelligence to check them, to know which inputs are correct, and to interrogate the results. So think of AI as the new computer.
We really need to be good at using it and working with it, and that's, I think, the biggest opportunity. It's also the biggest risk, because if you give a bad engineer a good calculator, they're going to produce more bad stuff even quicker, right? But a good engineer with a good calculator is the ideal situation. So we need to think about how we move toward a situation where we have competent people using high-quality AI, and away from a situation of incompetent people using competent AI. That's actually more dangerous than not having AI at all: incompetent people using it. So we still have a role; we need to up that competence. But even if you are competent, it's time to learn about AI. It's going to make you better, faster.
Perley Brewer (Guest)   
32:00
So, on a personal note, out of curiosity, how do you use ChatGPT? What kind of usage would you have?
Matthew Ryan Hallowell   
32:06
Oh, I use it all the time; I can give you all sorts of examples. I'll give you one. I've been working on a book, and it's not on AI; it's on energy-based safety.
Sometimes, you know, as a writer, you have blind spots. You have idiosyncrasies: you say things the same way, or there's a section that's boring, or whatever. Now, I don't use ChatGPT to do the writing for me. I'm a writer by profession, and I don't generally like the writing style that ChatGPT puts out as a default. But what I do sometimes is give it a chapter of a book that I wrote and ask: what part of this chapter is boring or snooze-inducing? Am I saying a term like 'focus on' too many times, to where it's getting annoying? So it's doing almost an editorial review for me, an alpha review of my work, and giving me some feedback.
Sometimes I'll ask it: do you think this example I used will resonate with a professional audience? And it will tell me: nah, it's a bit too academic; you need to use an analogy or something like that. So I use it as a collaborator to check my work; I don't use it to do my work. But even as a writer, it's really good at helping you with things, especially if you're one of those people who maybe isn't the best writer, or if you write things that run too long. You write an email and you think: nobody wants to read a page-long email, just give me the bullet points. You can ask ChatGPT: here's an email I'm going to send; make it half as long without losing any of the content. So, long way of saying, I use it to check my work and make my work better. I don't use it to do my work for me, because I'm in a creative space where that wouldn't work very well.
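For readers who want to try that email-shortening trick programmatically, here is a minimal sketch using the OpenAI Python client; it assumes an OPENAI_API_KEY environment variable is set, and the model name and prompt wording are illustrative choices, not anything recommended in the interview.

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

def shorten_email(email_text: str) -> str:
    """Ask the model to halve an email's length without losing content."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": ("Here's an email I'm going to send. Make it half as long "
                        "without losing any of the content:\n\n" + email_text),
        }],
    )
    return response.choices[0].message.content

print(shorten_email("Hi team, I wanted to reach out about rescheduling Thursday..."))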
But there are fun ways to use it too. The first time I ever learned about ChatGPT, it was my wife's birthday, coincidentally, and I said: hey, Robin, come check this out. And I told it: please write my wife a sweet and endearing birthday note with a couple of jokes in it, or whatever. I clicked send, and we both looked at what came out of it, and she said: oh, that's way better than anything you'd ever write. So that was my moment of: yeah, maybe it can help me be even better at a lot of different things.
Perley Brewer (Guest)   
34:40
 Now, do you use it for your teaching in the classroom?
Matthew Ryan Hallowell   
34:44
I do. Lots of my colleagues complain about students using ChatGPT, and my philosophy on this is that we should encourage our students to use those tools, but encourage them to use them responsibly.
And look, we're professors; we should be smart and clever, right? You'd think, not always the case, but you'd think, that we should be able to do that. So I'll give you an example of the way I used it in my class this term. We had just learned about the energy wheel and energy-based safety: how to use the concept of energy to identify hazards and find the stuff that kills you.
What I asked my students to do is this. I said: give ChatGPT this prompt, 'summarise the energy wheel,' and so on. Just give it this prompt, and what I want you, the students, to do is critique its response. You're a graduate student; you're the expert now at this stuff. ChatGPT is not going to get it all right. So grade it, critique it, work with it. I just flipped it on them: I want them to use ChatGPT, but I want them to see where the limits are. Where does it work well? Where does it not work well? How can they use it? How might they use it in a way that's going to be worse than what they'd produce on their own? So yeah, I think it's got a place in the classroom. I don't use it to create content for me, but I do use it on assignments, and I encourage my students to use it and cite how they used it.
I don't say you're going to get more or less credit; I just care about your output. But I do want you to disclose to me how you used it, and I think that's the right way to be using it. Even in the profession, forget students, professionals should do the same thing. If you write a LinkedIn post and you use ChatGPT to make it, first of all, anybody who knows ChatGPT will know that you did that, so you'll look like an idiot; and second, you should note at the bottom that these are ideas from a collaboration you had with ChatGPT. That way you don't look like a disingenuous fool to people who do know what that output looks like as a standard.


Perley Brewer (Guest)   
37:04
Well, Dr. Hallowell, I'd like to thank you very much for taking the time today to talk to us. This has been extremely informative when it comes to AI and its potential for occupational health and safety. It's an area that's certainly up and coming, and the big thing is that I think people do need to learn a lot more about it. We hear about it, but I don't think very many people really know it or understand it, certainly not to the level you do. So thank you very much. We appreciate you taking the time, and best of luck in your future research and work in the field of AI.


Matthew Ryan Hallowell   
37:38
Thank you, Perley. Thanks for having me. I enjoyed my time with you.


Perley Brewer (Guest)   
37:42
 So that's it for today's podcast. Stay safe. We will talk to you next week.
 
 
