I was having a chat with a friend who was telling me how tech startups are coming up with technologies aimed at replacing accountants. This shift towards automation may not be anything new; obviously we have started seeing McDonald's, among many others, replacing servers with automated ordering machines. However, it seems that this shift is starting to expand not just vertically but horizontally, into roles that were once considered knowledge-based. Do you think there is a chance that technology will one day largely replace most QARA professionals? source: https://www.linkedin.com/groups/78665/78665-6155799730891878404
Private answer
Julie Omohundro
There are a lot of activities that get bundled together under QARA. Some of them may lend themselves to automation; others probably not so much.
Private answer
Given the known fallibility of manual checking and processing of anything, especially tables of numbers and figures, I agree with Julie O. I think the answer must be a qualified yes. At Quantics Biostatistics we have been working hard on improving our in-house QA automation for GxP and I’ll be talking about an aspect of this at the IABS conference in Rockville in October.
Private answer
Feng-Chi Ho
Anything may be replaced, again and again; but time never allows you to replace even one moment!
Private answer
Dr. Patrick Druggan
Yes, it will. Liverpool University has developed an AI system for making judicial decisions. Of the 37 cases judged in the High Court, the computer got 36 right.
Private answer
Carolyn Hunt
This is exactly what the world needs: AI makes the calls on guilt or innocence. The mind boggles at the potential for savings. Since we can rely on AI to make the right decision, we can dispense with appeals. We can also rid the world of the major annoyance caused by lawyers--something humankind has been trying to do for centuries (see The Merchant of Venice). Taking it a step further, why not just have AI analyze babies as soon as they're born and off the potential troublemakers before they leave the hospital nursery? In fact, I think we should think very hard about whether we need people at all.
Signed, HAL. (For those of you who don't recognize sarcasm, allow me to spell it out: I'm being sarcastic. Except for the lawyer part.)
Private answer
Carolyn Hunt
Just what we need–a world where we're judged by AI. I'm guessing there won't be any appeals.
Private answer
Dr. Patrick Druggan
Carolyn Hunt, there would be appeals, as each AI system would be unique, just like humans. The issue would be sentencing; that would probably be restricted to human judges for capital crimes.
Private answer
Jose O Cotta
Just the combination of the two functions, QA and RA, means the chance of replacement becomes more remote. The introduction of risk management systems as the basis of regulatory systems globally adds another hurdle to automation. Let us all have faith and belief in our positive contribution. With regard to the High Court example referred to earlier, it would be worthwhile to know whether any of the 36 "right" cases were later overturned on appeal.
Private answer
Julie Omohundro
Carolyn, just because you're being sarcastic doesn't mean they aren't out to get you. :)
The rise of tech is strongly linked to the rise of anti-humanism. Businesses don't want to deal with humans, thus the rise of B2B. The internet serves as a wall for businesses to hide behind. Every time I type a message in one of these little windows, safely constrained by a character limit, I think of the food slot in the door of a jail cell.
Private answer
Matthew Davies
If they can automate article writing for sports reporting, something that would seem to require a certain amount of creativity, then surely they can plug in the algorithms for all the various tax laws.
Private answer
Marc Timothy Smith
Carolyn - I'm 66 and was accepted to Tulane medical school years ago, but decided not to go because it wasn't my "calling". My father was a neurologist and my brother (now in his early 70s) is a pediatrician. Your statement really isn't far off, according to my brother. In 50 years or so, babies will be tested and analyzed *before* birth, and many deformities and abnormalities will be corrected prior to birth. Technology is advancing at an unbelievable rate. BTW, my other older brother is a dentist. To see what I'm talking about, Google "Stem Cells Could End Root Canals". And it has even been shown that genetics plays a part in whether or not a person will get cavities.
Private answer
G M Butcher
Any chance I can get AI to provide LinkedIn replies so there's no need to use my own I?
Private answer
Dr. Patrick Druggan
Jose - none of these are obstacles to AI, especially not risk management. It is not algorithm-based; it is based on neural networks, like the brain. You give it examples, it learns. It makes a mistake, it learns.
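To make that "give it examples, it learns from its mistakes" idea concrete, here is a minimal sketch in Python of a toy perceptron-style learner. The data, feature names, and decision (escalate vs. accept) are purely hypothetical and only illustrate the error-correction loop, not any real QARA system.

# Toy illustration of "give it examples, it learns; it makes a mistake, it learns".
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, with label in {0, 1}."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # nonzero only when the model makes a mistake
            if error:
                # nudge the weights and bias toward the correct answer
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Hypothetical examples: (defect_rate, open_audit_findings) -> 1 = escalate, 0 = accept
examples = [([0.1, 0.0], 0), ([0.9, 1.0], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]
weights, bias = train_perceptron(examples)
print(weights, bias)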
Private answer
Jonathan Loo
Self-learning AI is definitely a huge development in artificial intelligence, but for the foreseeable future it still takes a long time for an AI to converge on a single set of rules. In our world, where standards are revised, regulations amended, and devices redesigned on a constant basis, I agree that what Jose said is very true. But surely we can get simple AI to take over many of the less subjective tasks, such as report writing, registration dossier preparation, and protocol design?
Private answer
Julie Omohundro
I attended an early conference on AI and neural networks, somewhere in Southern California, circa 1980.
I was just a student tag-along, so afterwards I asked one of the cognitive PhDs in the crowd what it all meant. "As far as I can tell, where we used to have one type of intelligence we didn't understand, now we have two." Her observation is still very much on target today, IMO. Of course there is a lot of stuff being called "AI" that people do understand, but that's sort of like IQ, which only measures "intelligence" because everyone says it does.
Private answer
Being in my early 60s, I couldn't tell you how many times I've heard it decried that we'll be replaced by AI. Way back in the 1980s - the Stone Age, right? - I was in a fully automated plastic injection mold-making shop in Switzerland, at a time when US mold-makers would have said it wasn't possible. As a recruiter I've discovered a harsh reality: AI and/or automation come into play when the labor pool is too shallow to get the work done. Period. Our current labor pool of trained experience is shallower than I've ever seen it.
Private answer
Dr. Patrick Druggan
http://www.liverpoolecho.co.uk/news/business/wirrals-riverview-law-buys-based-9970916
Private answer
You can error-proof established processes and record keeping. But I can't picture AI doing a root cause investigation, implementing a corrective action, assuring compliance, doing internal audits, or explaining Design Control to R&D. There's a lot of judgment and knowledge in QA.
Private answer
Jeff King
I think if you're living in a world of black and white, then AI may be able to take over, but RA and QA live a lot in the "gray" areas and require thinking, judgement, rapport with colleagues and notified bodies, and sometimes a gut feeling. I think it may be a great new tool that people can use, but I don't think anyone will be replaced. I heard years ago about job boards and even LinkedIn killing the recruiting industry by replacing recruiters. I guess as a Regulatory and Quality recruiter I'm in double jeopardy of being replaced, and yet somehow I've never been busier...
Private answer
Marc Timothy Smith
This all really depends on the timeline being considered. Technology always, over time, replaces or significantly reduces the need for humans. If we are looking at 10 years, not so much. If we look at 50 years, 100 years, or 500 years, that is another story. My only point is that we tend to limit the time period for predictions such as this.
Private answer
Dr. Patrick Druggan
The test of this is: will you buy an AI system rather than recruit someone, say for one in five people in your RA/QA department? The answer for me would be based on cost and quality of decision.
Private answer
Dr. Patrick Druggan
Jeff, they are already talking about this for courts in Britain - law is black and white, justice is grey. We need to think beyond now and to the future. An AI system can train continually; it doesn't retire, it only upgrades. You don't need rapport when you are dealing with another machine.
Private answer
Jeff King
Several good points made here. Yes, laws and regulations are black and white, but the interpretation creates the gray area. The notified bodies create the regulations, but even their own investigators can have differences of opinion on how those regulations are to be applied and adhered to. Patrick, you make a good point that machines talking to machines don't need rapport and would have the same interpretation. But to Marc's point, how long before enough of those machines are in place that there is no human involvement at all?
Private answer
Stacy Livingston
Whether or not the technology is there to replace some of their activities, it is crucial for companies in the medical industry to have the in-house expertise to support the documentation, e.g. regular audits, technical updates, etc. Liability cannot be passed to a third party, and thus the technology cannot be held responsible for QARA matters. I think the industry remains pretty safe, with new tools to help it be more effective :)
Private answer
Michael Lehmicke
Great question. AI already operates in "grey areas" (e.g. facial recognition, understanding and interpreting ambiguous statements based on context). Can it replace a QARA professional? Maybe it depends on the individual QARA professional.
Private answer
Marie Suetsugu
'One day', perhaps...but a large part of my work is updating and improving the quality system by revising the QM and other SOPs. Given the feeble quality of Google Translate, I'd say: No for the foreseeable future.
Private answer
Dr. Patrick Druggan
Jeff King, as Stacy has mentioned already, I don't think there would ever be an all-AI system.
Private answer
Dr. Patrick Druggan
Marie Suetsugu, the AI system would automatically translate documents - it would be a fundamental part of the AI system.
Private answer
Marie Suetsugu
Dear Patrick,
I'm sorry but my limited experience tells me that automatic translation doesn't work for non-Indo-European languages (yet).
Private answer
Julie Omohundro
So far I haven't found it to work exceptionally well for English to anything or anything to English.
Private answer
As with anything, technology can improve the outcome of repetitive tasks that are prone to common mistakes, as in the case of McDonald's automated ordering machines, robots building cars, or automated testing. However, the question posed by Jonathan is much larger: can technology replace a human's ability to think and reason? Artificial intelligence has improved and will continue to grow, but can AI ever develop plausible reasoning, which is one of the main strengths of a good QARA professional? I say no, for now.
Private answer
Gunther Reinhardt
Technological change in the workplace will always have some people who take the change on board and make it their own, while others will feel left out and sidelined. We as a society owe it to ourselves to help manage this change towards further automation because, like the global TPP, without bringing people along we will create a disenfranchised underclass and, consequently, a very uneven society.