<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Tech &amp; AI - Intelligent Automation</title>
	<atom:link href="https://intelligentautomationtrends.com/category/tech-ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://intelligentautomationtrends.com</link>
	<description></description>
	<lastBuildDate>Fri, 07 May 2021 02:23:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://intelligentautomationtrends.com/wp-content/uploads/2021/01/cropped-favicon_intelligent_automation-32x32.png</url>
	<title>Tech &amp; AI - Intelligent Automation</title>
	<link>https://intelligentautomationtrends.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Beanworks Introduces Artificial Intelligence (AI) for Automated Accounts Payable</title>
		<link>https://intelligentautomationtrends.com/beanworks-introduces-artificial-intelligence-ai-for-automated-accounts-payable/</link>
					<comments>https://intelligentautomationtrends.com/beanworks-introduces-artificial-intelligence-ai-for-automated-accounts-payable/#respond</comments>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Wed, 10 Mar 2021 01:50:27 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/?p=7783</guid>

					<description><![CDATA[Beanworks, the accounts payable (AP) automation leader, today announced the introduction of artificial intelligence (AI) to its data capture functionality, radically increasing the speed and accuracy of customers’ data entry. This new AI-enabled capability is called SmartCapture. Beanworks’ SmartCapture offering delivers 99%+ accuracy, while enabling the completion of AP processes in just minutes. Combined with [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Beanworks, the accounts payable (AP) automation leader, today announced the introduction of artificial intelligence (AI) to its data capture functionality, radically increasing the speed and accuracy of customers’ data entry. This new AI-enabled capability is called SmartCapture.</p>
<p>Beanworks’ SmartCapture offering delivers 99%+ accuracy, while enabling the completion of AP processes in just minutes. Combined with Beanworks’ existing SmartCoding technology, which enables an invoice to be coded with just one click, SmartCapture reduces the time accounting teams spend on data entry by more than 80%.</p>
<p>With every invoice processed, AI makes Beanworks customers’ systems more intelligent, learning not only how to interpret AP documents but also how to code them, which frees accountants for other, more strategic tasks.</p>
<p>“Beanworks is committed to driving innovation and offering our customers the most accurate, efficient and delightful user experiences possible,” says Catherine Dahl, co-founder and CEO, Beanworks. “We are thrilled to unveil this artificial intelligence integration to further streamline our customers’ accounting workflows.”</p>
<p>This exciting AI innovation comes on the heels of Beanworks’ Expense Reimbursement module launch, enabling the automation of Beanworks users’ entire AP workflows in one centralized location, from purchase orders and invoices to expenses and payments.</p>
<p>Beanworks’ SmartCapture capability is available now.</p>
<p><strong>About Beanworks</strong></p>
<p>Beanworks is a cloud-based accounts payable automation solution that helps companies transform their purchase-to-payment processes by eliminating paperwork and manual work. It can reduce companies' invoice processing costs by 86 percent, mitigate AP risk and empower remote teams with accounts payable automation.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://intelligentautomationtrends.com/beanworks-introduces-artificial-intelligence-ai-for-automated-accounts-payable/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Thousands of ocean fishing boats could be using forced labor – we used AI and satellite data to find them</title>
		<link>https://intelligentautomationtrends.com/thousands-of-ocean-fishing-boats-could-be-using-forced-labor-we-used-ai-and-satellite-data-to-find-them/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Mon, 21 Dec 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/thousands-of-ocean-fishing-boats-could-be-using-forced-labor-we-used-ai-and-satellite-data-to-find-them/</guid>

					<description><![CDATA[Fishing on the high seas is a bit of a mystery, economically speaking. These areas of open ocean beyond the territorial jurisdiction of any nation are generally considered high-effort, low-payoff fishing grounds, yet fishers continue to work in them anyway. I am an environmental data scientist who leverages data and analytical techniques to answer critical [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Fishing on the high seas is a bit of a mystery, economically speaking. These areas of open ocean beyond the territorial jurisdiction of any nation are generally considered high-effort, low-payoff fishing grounds, yet fishers continue to work in them anyway.</p>
<p>I am an environmental data scientist who leverages data and analytical techniques to answer critical questions about natural resource management. Back in 2018, my colleagues at the Environmental Market Solutions Lab found that high-seas fishing often appears to be an almost entirely unprofitable endeavor. This is true even when taking government subsidies into consideration.</p>
<p>Yet fishers continue to harvest on the high seas in staggering numbers, suggesting that this activity is being financially supported beyond just government subsidies.</p>
<p>Forced labor is a known problem in open ocean fishing, but the scale has been very hard to track historically. This mystery – why so many vessels are fishing the high seas if it isn’t profitable – got our team thinking that maybe many of these vessels are, in a sense, being subsidized through low labor costs. These costs could even be zero if the vessels were using forced labor.</p>
<p>By combining our team’s data science expertise with satellite monitoring, input from human rights practitioners and machine learning algorithms, we developed a way to predict if a fishing vessel was at high risk of using forced labor. Our study shows that up to 100,000 individuals may have been victims of forced labor between 2012 and 2018 on these ships.</p>
<p><strong>Unique behavior from forced labor</strong></p>
<p>Forced labor is defined by the International Labour Organization as “all work or service which is exacted from any person under the menace of any penalty and for which the said person has not offered themself voluntarily.” Essentially, many of these workers may be enslaved, unable to stop work, trapped out on the high seas. Sadly, forced labor has been widely documented in the fishing world, but the true extent of the problem has remained largely unknown.</p>
<p>Our team wanted to say more about how forced labor is being used in fisheries, and the breakthrough came once we asked a key question that drove this project: What if vessels that used forced labor behave in observable, fundamentally different ways from vessels that do not?</p>
<p>To answer this, we first looked at 22 vessels known to have used forced labor. We got their historical satellite tracking data from Global Fishing Watch – a nonprofit organization that promotes ocean sustainability using near-real-time fishing data – and used it to find commonalities in how these vessels behaved. To further inform what to look for in the satellite monitoring data, we met with human rights groups, including Liberty Shared, Greenpeace and the Environmental Justice Foundation, to determine which of these vessel behaviors might indicate a potential risk of forced labor.</p>
<p>This list of indicators included vessel behaviors like spending more time on the high seas, traveling farther from ports than other vessels and fishing more hours per day than other boats. For example, sometimes these suspicious vessels would be at sea for many months at a time.</p>
<p>Now that we had a good idea of the “risky” behaviors that signal the potential use of forced labor, our team, with the help of Google data scientists, used machine learning techniques to look for similar behavioral patterns in thousands of other vessels.</p>
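<p>As a loose, purely illustrative sketch of the idea (the study’s actual model is a trained machine-learning classifier built with satellite data; every threshold and field name below is invented for illustration), a rule-based version of such a risk flag might look like this:</p>

```python
# Toy sketch: flag vessels whose behavior matches the "risky" indicators
# described in the article (long time at sea, far from port, long fishing
# days). All thresholds and field names here are invented for illustration;
# the real study used a machine-learning classifier, not fixed rules.

def risk_score(vessel):
    """Count how many behavioral risk indicators a vessel trips."""
    indicators = [
        vessel["days_at_sea"] > 180,           # at sea for many months
        vessel["max_km_from_port"] > 3000,     # travels unusually far from port
        vessel["fishing_hours_per_day"] > 18,  # fishes most of each day
    ]
    return sum(indicators)

def flag_high_risk(vessels, threshold=2):
    """Return names of vessels tripping at least `threshold` indicators."""
    return [v["name"] for v in vessels if risk_score(v) >= threshold]

fleet = [
    {"name": "A", "days_at_sea": 240, "max_km_from_port": 4000, "fishing_hours_per_day": 20},
    {"name": "B", "days_at_sea": 30, "max_km_from_port": 500, "fishing_hours_per_day": 8},
]
print(flag_high_risk(fleet))  # ['A']
```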
<p><strong>Shockingly widespread</strong></p>
<p>We examined 16,000 fishing vessels using data from 2012 to 2018. Between 14% and 26% of those boats showed suspicious behavior that suggests a high likelihood that they are exploiting forced labor. This means that in those six years, as many as 100,000 people may have been victims of forced labor. We don’t know whether those boats are still active or how many high-risk vessels there may be on the seas today. But according to Global Fishing Watch, as of 2018, there were nearly 13,000 vessels operating in industrial longliner, trawler and squid jigger fleets.</p>
<p>Squid jiggers lure their catch to the surface at night using bright lights; longliner boats trail a line with baited hooks; and trawlers pull fishing nets through the water behind them. Squid jiggers had the highest percentage of vessels that exhibited behaviors that indicate the potential use of forced labor, followed closely by longliner fishing vessels and, to a lesser extent, trawlers.</p>
<p>Another key finding from our study is that forced labor violations are likely occurring in all major ocean basins, both on the high seas and within national jurisdictions. High-risk vessels frequented ports across 79 countries in 2018, with the ports predominantly located in Africa, Asia and South America. Also notable for frequent visits by these suspicious vessels were Canada, the United States, New Zealand and several European countries. These ports represent both potential sources of exploited labor as well as transfer points for seafood caught using forced labor.</p>
<p>As it stands now, our model is a proof of concept that still needs to be tested in the real world. By having the model assess vessels already caught using forced labor, we were able to show that the model was accurate 92% of the time when it flagged suspicious vessels. In the future, our team hopes to further validate and improve the model by gathering more information on known forced labor cases.</p>
<p><strong>Turning data into action</strong></p>
<p>Our team has built a predictive model that can identify vessels that are at high risk for engaging in forced labor. We believe our results could complement and inform existing efforts to combat human rights violations and promote supply chain transparency. Currently, our team is using individual vessel risk scores to determine forced labor risks for specific seafood products as a whole.</p>
<p>As we get more substantial data and improve the accuracy of the model, we hope that it can eventually be used to liberate victims of forced labor in fisheries, improve work conditions and help prevent human rights abuses from occurring in the first place.</p>
<p>We’re now working with Global Fishing Watch to identify partners across governments, enforcement agencies and labor groups that can use our results to more effectively target vessel inspections. These inspections offer opportunities to both catch offenders and provide more data to feed into the model, improving its accuracy.</p>
<p>Gavin McDonald,<br />
Senior Project Researcher, University of California Santa Barbara</p>
<p>This article is republished from The Conversation under a Creative Commons license. <a href="https://theconversation.com/thousands-of-ocean-fishing-boats-could-be-using-forced-labor-we-used-ai-and-satellite-data-to-find-them-152166" target="_blank" rel="noopener">Read the original article</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>It takes a lot of energy for machines to learn – here’s why AI is so power-hungry</title>
		<link>https://intelligentautomationtrends.com/it-takes-a-lot-of-energy-for-machines-to-learn-heres-why-ai-is-so-power-hungry/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Tue, 15 Dec 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/it-takes-a-lot-of-energy-for-machines-to-learn-heres-why-ai-is-so-power-hungry/</guid>

					<description><![CDATA[This month, Google forced out a prominent AI ethics researcher after she voiced frustration with the company for making her withdraw a research paper. The paper pointed out the risks of language-processing artificial intelligence, the type used in Google Search and other text analysis products. Among the risks is the large carbon footprint of developing [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>This month, Google forced out a prominent AI ethics researcher after she voiced frustration with the company for making her withdraw a research paper. The paper pointed out the risks of language-processing artificial intelligence, the type used in Google Search and other text analysis products.</p>
<p>Among the risks is the large carbon footprint of developing this kind of AI technology. By some estimates, training an AI model generates as much carbon emissions as it takes to build and drive five cars over their lifetimes.</p>
<p>I am a researcher who studies and develops AI models, and I am all too familiar with the skyrocketing energy and financial costs of AI research. Why have AI models become so power hungry, and how are they different from traditional data center computation?</p>
<p><strong>Today’s training is inefficient</strong></p>
<p>Traditional data processing jobs done in data centers include video streaming, email and social media. AI is more computationally intensive because it needs to read through lots of data until it learns to understand it – that is, until it is trained.</p>
<p>This training is very inefficient compared to how people learn. Modern AI uses artificial neural networks, which are mathematical computations that mimic neurons in the human brain. The strength of connection of each neuron to its neighbor is a parameter of the network called weight. To learn how to understand language, the network starts with random weights and adjusts them until the output agrees with the correct answer.</p>
<p>A common way of training a language network is by feeding it lots of text from websites like Wikipedia and news outlets with some of the words masked out and asking it to guess the masked-out words. An example is “my dog is cute,” with the word “cute” masked out. Initially, the model gets them all wrong, but, after many rounds of adjustment, the connection weights start to change and pick up patterns in the data. The network eventually becomes accurate.</p>
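<p>To make the masked-word task concrete, here is a toy stand-in that guesses a masked word from simple bigram counts. This is not how BERT works – real training adjusts millions of neural-network weights through many rounds of correction – and the tiny corpus below is invented; it only illustrates the fill-in-the-blank objective itself:</p>

```python
# Toy masked-word "model": predict a masked word from bigram counts.
# Real masked-language-model training adjusts neural-network weights;
# this count-based stand-in (with an invented corpus) only illustrates
# the guess-the-masked-word task described in the article.
from collections import Counter, defaultdict

corpus = ["my dog is cute", "my dog is happy", "my dog is cute"]

# "Training": count which word follows each word in the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def guess_masked(prefix):
    """Guess the word most often seen after the last word of `prefix`."""
    last = prefix.split()[-1]
    return follows[last].most_common(1)[0][0]

print(guess_masked("my dog is"))  # cute (seen twice vs. once for "happy")
```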
<p>One recent model called Bidirectional Encoder Representations from Transformers (BERT) used 3.3 billion words from English books and Wikipedia articles. Moreover, during training BERT read this data set not once, but 40 times. To compare, an average child learning to talk might hear 45 million words by age five, 3,000 times fewer than BERT.</p>
<p><strong>Looking for the right structure</strong></p>
<p>What makes language models even more costly to build is that this training process happens many times during the course of development. This is because researchers want to find the best structure for the network – how many neurons, how many connections between neurons, how fast the parameters should be changing during learning and so on. The more combinations they try, the better the chance that the network achieves a high accuracy. Human brains, in contrast, do not need to find an optimal structure – they come with a prebuilt structure that has been honed by evolution.</p>
<p>As companies and academics compete in the AI space, the pressure is on to improve on the state of the art. Even achieving a 1% improvement in accuracy on difficult tasks like machine translation is considered significant and leads to good publicity and better products. But to get that 1% improvement, one researcher might train the model thousands of times, each time with a different structure, until the best one is found.</p>
<p>Researchers at the University of Massachusetts Amherst estimated the energy cost of developing AI language models by measuring the power consumption of common hardware used during training. They found that training BERT once has the carbon footprint of a passenger flying a round trip between New York and San Francisco. However, by searching using different structures – that is, by training the algorithm multiple times on the data with slightly different numbers of neurons, connections and other parameters – the cost became the equivalent of 315 passengers, or an entire 747 jet.</p>
<p><strong>Bigger and hotter</strong></p>
<p>AI models are also much bigger than they need to be, and growing larger every year. A more recent language model similar to BERT, called GPT-2, has 1.5 billion weights in its network. GPT-3, which created a stir this year because of its high accuracy, has 175 billion weights.</p>
<p>Researchers discovered that having larger networks leads to better accuracy, even if only a tiny fraction of the network ends up being useful. Something similar happens in children’s brains when neuronal connections are first added and then reduced, but the biological brain is much more energy efficient than computers.</p>
<p>AI models are trained on specialized hardware like graphics processor units, which draw more power than traditional CPUs. If you own a gaming laptop, it probably has one of these graphics processor units to create advanced graphics for, say, playing Minecraft RTX. You might also notice that they generate a lot more heat than regular laptops.</p>
<p>All of this means that developing advanced AI models is adding up to a large carbon footprint. Unless we switch to 100% renewable energy sources, AI progress may stand at odds with the goals of cutting greenhouse emissions and slowing down climate change. The financial cost of development is also becoming so high that only a few select labs can afford to do it, and they will be the ones to set the agenda for what kinds of AI models get developed.</p>
<p><strong>Doing more with less</strong></p>
<p>What does this mean for the future of AI research? Things may not be as bleak as they look. The cost of training might come down as more efficient training methods are invented. Similarly, while data center energy use was predicted to explode in recent years, this has not happened due to improvements in data center efficiency, more efficient hardware and cooling.</p>
<p>There is also a trade-off between the cost of training the models and the cost of using them, so spending more energy at training time to come up with a smaller model might actually make using them cheaper. Because a model will be used many times in its lifetime, that can add up to large energy savings.</p>
<p>In my lab’s research, we have been looking at ways to make AI models smaller by sharing weights, or using the same weights in multiple parts of the network. We call these shapeshifter networks because a small set of weights can be reconfigured into a larger network of any shape or structure. Other researchers have shown that weight-sharing has better performance in the same amount of training time.</p>
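<p>A minimal sketch of the weight-sharing idea (the matrix size, values and activation below are invented; this is not the lab’s actual shapeshifter implementation): one small weight matrix is reused at every layer, so applying more layers adds depth without adding parameters.</p>

```python
# Sketch of weight sharing: reuse ONE weight matrix across several layers.
# The dimensions, values and ReLU activation are illustrative only, not
# taken from the shapeshifter-networks research itself.

W = [[0.5, -0.2], [0.1, 0.4]]  # a single 2x2 weight matrix

def layer(x, weights):
    """One dense layer with ReLU: y_i = max(0, sum_j w_ij * x_j)."""
    return [max(0.0, sum(w * xj for w, xj in zip(row, x))) for row in weights]

def shared_network(x, depth):
    """Apply the SAME weights `depth` times -- the parameter count stays fixed."""
    for _ in range(depth):
        x = layer(x, W)
    return x

# Three layers deep, yet still only four parameters in total.
out = shared_network([1.0, 1.0], depth=3)
```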
<p>Looking forward, the AI community should invest more in developing energy-efficient training schemes. Otherwise, it risks having AI become dominated by a select few who can afford to set the agenda, including what kinds of models are developed, what kinds of data are used to train them and what the models are used for.</p>
<p>Kate Saenko<br />
<em>Associate Professor of Computer Science, Boston University</em></p>
<p>This article is republished from The Conversation under a Creative Commons license. <a href="https://theconversation.com/it-takes-a-lot-of-energy-for-machines-to-learn-heres-why-ai-is-so-power-hungry-151825" target="_blank" rel="noopener">Read the original article.</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What is a neural network? A computer scientist explains</title>
		<link>https://intelligentautomationtrends.com/what-is-a-neural-network-a-computer-scientist-explains/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Fri, 11 Dec 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/what-is-a-neural-network-a-computer-scientist-explains/</guid>

					<description><![CDATA[What is a neural network? Explained by a computer scientist Editor’s note: One of the central technologies of artificial intelligence is neural networks. In this interview, Tam Nguyen, a professor of computer science at the University of Dayton, explains how neural networks, programs in which a series of algorithms try to simulate the human brain [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><strong>What is a neural network? Explained by a computer scientist </strong></p>
<p>Editor’s note: One of the central technologies of artificial intelligence is neural networks. In this interview, Tam Nguyen, a professor of computer science at the University of Dayton, explains how neural networks, programs in which a series of algorithms try to simulate the human brain, work.</p>
<p><strong>What are some examples of neural networks that are familiar to most people?</strong></p>
<p>There are many applications of neural networks. One common example is your smartphone camera’s ability to recognize faces.</p>
<p>Driverless cars are equipped with multiple cameras which try to recognize other vehicles, traffic signs and pedestrians by using neural networks, and turn or adjust their speed accordingly.</p>
<p>Neural networks are also behind the text suggestions you see while writing texts or emails, and even in the translations tools available online.</p>
<p><strong>Does the network need to have prior knowledge of something to be able to classify or recognize it?</strong></p>
<p>Yes, that’s why there is a need to use big data in training neural networks. They work because they are trained on vast amounts of data to then recognize, classify and predict things.</p>
<p>In the driverless cars example, it would need to look at millions of images and video of all the things on the street and be told what each of those things is. When you click on the images of crosswalks to prove that you’re not a robot while browsing the internet, it can also be used to help train a neural network. Only after seeing millions of crosswalks, from all different angles and lighting conditions, would a self-driving car be able to recognize them when it’s driving around in real life.</p>
<p>More complicated neural networks are actually able to teach themselves. In the video linked below, the network is given the task of going from point A to point B, and you can see it trying all sorts of things to try to get the model to the end of the course, until it finds one that does the best job.</p>
<p>Some neural networks can work together to create something new. In this example, the networks create virtual faces that don’t belong to real people when you refresh the screen. One network makes an attempt at creating a face, and the other tries to judge whether it is real or fake. They go back and forth until the second one cannot tell that the face created by the first is fake.</p>
<p>Humans take advantage of big data too. A person perceives around 30 frames or images per second, which means 1,800 images per minute and over 600 million images per year. That is why we should give neural networks a similar opportunity, by training them on comparably big data.</p>
<p><strong>How does a basic neural network work?</strong></p>
<p>A neural network is a network of artificial neurons programmed in software. It tries to simulate the human brain, so it has many layers of “neurons” just like the neurons in our brain. The first layer of neurons will receive inputs like images, video, sound, text, etc. This input data goes through all the layers, as the output of one layer is fed into the next layer.</p>
<p>Let’s take an example of a neural network that is trained to recognize dogs and cats. The first layer of neurons will break up this image into areas of light and dark. This data will be fed into the next layer to recognize edges. The next layer would then try to recognize the shapes formed by the combination of edges. The data would go through several layers in a similar fashion to finally recognize whether the image you showed it is a dog or a cat according to the data it’s been trained on.</p>
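<p>As a toy illustration of that layer-by-layer flow (the “layers” below are hand-written functions with invented rules, whereas a real network learns its layers from data), each stage’s output feeds the next:</p>

```python
# Toy "layers" mimicking the article's description: light/dark regions,
# then edges, then a final decision. Each stage's output feeds the next.
# Real networks LEARN these stages from data; these hand-written rules
# and the tiny invented "image" exist only to show the pipeline shape.

image = [
    [0.9, 0.9, 0.1],
    [0.9, 0.1, 0.1],
    [0.1, 0.1, 0.1],
]

def light_dark(img, cutoff=0.5):
    """Layer 1: threshold pixels into light (1) and dark (0) regions."""
    return [[1 if p > cutoff else 0 for p in row] for row in img]

def edges(binary):
    """Layer 2: mark horizontal transitions between light and dark."""
    return [[abs(row[i] - row[i + 1]) for i in range(len(row) - 1)]
            for row in binary]

def classify(edge_map, threshold=2):
    """Final layer: a stand-in decision based on the total edge count."""
    total = sum(sum(row) for row in edge_map)
    return "dog" if total >= threshold else "cat"

label = classify(edges(light_dark(image)))
print(label)  # dog
```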
<p>These networks can be incredibly complex and consist of millions of parameters to classify and recognize the input they receive.</p>
<p><strong>Why are we seeing so many applications of neural networks now?</strong></p>
<p>Actually, neural networks were invented a long time ago, in 1943, when Warren McCulloch and Walter Pitts created a computational model for neural networks based on algorithms. Then the idea went through a long hibernation because the immense computational resources needed to build neural networks did not exist yet.</p>
<p>Recently, the idea has come back in a big way, thanks to advanced computational resources like graphical processing units (GPUs). They are chips that have been used for processing graphics in video games, but it turns out that they are excellent for crunching the data required to run neural networks too. That is why we now see the proliferation of neural networks.</p>
<p>Tam Nguyen, Assistant Professor, University of Dayton</p>
<p>This article is republished from The Conversation under a Creative Commons license. <a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897" target="_blank" rel="noopener">Read the original article.</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI needs to face up to its invisible-worker problem</title>
		<link>https://intelligentautomationtrends.com/ai-needs-to-face-up-to-its-invisible-worker-problem/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Fri, 11 Dec 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/ai-needs-to-face-up-to-its-invisible-worker-problem/</guid>

					<description><![CDATA[Many of the most successful and widely used machine-learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Many of the most successful and widely used machine-learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million people in the US alone earn money each month by doing work on these platforms. Around 250,000 of them earn at least three-quarters of their income this way. But even though many work for some of the richest AI labs in the world, they are paid below minimum wage and given no opportunities to develop their skills.</p>
<p>Saiph Savage is the director of the human-computer interaction lab at West Virginia University, where she works on civic technology, focusing on issues such as fighting disinformation and helping gig workers improve their working conditions. This week she gave an invited talk at NeurIPS, one of the world’s biggest AI conferences, titled “A future of work for the invisible workers in AI.” I talked to Savage on Zoom the day before she gave her talk.</p>
<p>Our conversation has been edited for clarity and length. To read the full article, <a href="https://www.technologyreview.com/2020/12/11/1014081/ai-machine-learning-crowd-gig-worker-problem-amazon-mechanical-turk/" target="_blank" rel="noopener">click here</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Robots are playing many roles in the coronavirus crisis – and offering lessons for future disasters</title>
		<link>https://intelligentautomationtrends.com/robots-are-playing-many-roles-in-the-coronavirus-crisis-and-offering-lessons-for-future-disasters/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Sat, 22 Aug 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/robots-are-playing-many-roles-in-the-coronavirus-crisis-and-offering-lessons-for-future-disasters/</guid>

					<description><![CDATA[A cylindrical robot rolls into a treatment room to allow health care workers to remotely take temperatures and measure blood pressure and oxygen saturation from patients hooked up to a ventilator. Another robot that looks like a pair of large fluorescent lights rotated vertically travels throughout a hospital disinfecting with ultraviolet light. Meanwhile a cart-like [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>A cylindrical robot rolls into a treatment room to allow health care workers to remotely take temperatures and measure blood pressure and oxygen saturation from patients hooked up to a ventilator. Another robot that looks like a pair of large fluorescent lights rotated vertically travels throughout a hospital disinfecting with ultraviolet light. Meanwhile a cart-like robot brings food to people quarantined in a 16-story hotel. Outside, quadcopter drones ferry test samples to laboratories and watch for violations of stay-at-home restrictions.</p>
<p>These are just a few of the two dozen ways robots have been used during the COVID-19 pandemic, from health care in and out of hospitals, automation of testing, supporting public safety and public works, to continuing daily work and life.</p>
<p>The lessons they’re teaching for the future are the same lessons learned at previous disasters but quickly forgotten as interest and funding faded. The best robots for a disaster are the robots, like those in these examples, that already exist in the health care and public safety sectors.</p>
<p>Research laboratories and startups are creating new robots, including one designed to allow health care workers to remotely take blood samples and perform mouth swabs. These prototypes are unlikely to make a difference now. However, the robots under development could make a difference in future disasters if momentum for robotics research continues.</p>
<p><strong>Robots around the world</strong></p>
<p>As roboticists at Texas A&amp;M University and the Center for Robot-Assisted Search and Rescue, we examined over 120 press and social media reports from China, the U.S. and 19 other countries about how robots are being used during the COVID-19 pandemic. We found that ground and aerial robots are playing a notable role in almost every aspect of managing the crisis.</p>
<p>In hospitals, doctors and nurses, family members and even receptionists are using robots to interact in real time with patients from a safe distance. Specialized robots are disinfecting rooms and delivering meals or prescriptions, handling the hidden extra work associated with a surge in patients. Delivery robots are transporting infectious samples to laboratories for testing.</p>
<p>Outside of hospitals, public works and public safety departments are using robots to spray disinfectant throughout public spaces. Drones are providing thermal imagery to help identify infected citizens and enforce quarantines and social distancing restrictions. Robots are even rolling through crowds, broadcasting public service messages about the virus and social distancing.</p>
<p>At work and home, robots are assisting in surprising ways. Realtors are teleoperating robots to show properties from the safety of their own homes. Workers building a new hospital in China were able to work through the night because drones carried lighting. In Japan, students used robots to walk the stage for graduation, and in Cyprus, a person used a drone to walk his dog without violating stay-at-home restrictions.</p>
<p><strong>Helping workers, not replacing them</strong></p>
<p>Every disaster is different, but the experience of using robots for the COVID-19 pandemic presents an opportunity to finally learn three lessons documented over the past 20 years. One important lesson is that during a disaster robots do not replace people. They either perform tasks that a person could not do, or could not do safely, or take on tasks that free up responders to handle the increased workload.</p>
<p>The majority of robots being used in hospitals treating COVID-19 patients have not replaced health care professionals. These robots are teleoperated, enabling the health care workers to apply their expertise and compassion to sick and isolated patients remotely.</p>
<p>A small number of robots are autonomous, such as the popular UVD decontamination robots and meal and prescription carts. But the reports indicate that the robots are not displacing workers. Instead, the robots are helping the existing hospital staff cope with the surge in infectious patients. The decontamination robots disinfect better and faster than human cleaners, while the carts reduce the amount of time and personal protective equipment nurses and aides must spend on ancillary tasks.</p>
<p><strong>Off-the-shelf over prototypes</strong></p>
<p>The second lesson is the robots used during an emergency are usually already in common use before the disaster. Technologists often rush out well-intentioned prototypes, but during an emergency, responders – health care workers and search-and-rescue teams – are too busy and stressed to learn to use something new and unfamiliar. They typically can’t absorb the unanticipated tasks and procedures, like having to frequently reboot or change batteries, that usually accompany new technology.</p>
<p>Fortunately, responders adopt technologies that their peers have used extensively and shown to work. For example, decontamination robots were already in daily use at many locations for preventing hospital-acquired infections. Sometimes responders also adapt existing robots. For example, agricultural drones designed for spraying pesticides in open fields are being adapted for spraying disinfectants in crowded urban cityscapes in China and India.</p>
<p>A third lesson follows from the second. Repurposing existing robots is generally more effective than building specialized prototypes. Building a new, specialized robot for a task takes years. Imagine trying to build a new kind of automobile from scratch. Even if such a car could be quickly designed and manufactured, only a few cars would be produced at first and they would likely lack the reliability, ease of use and safety that comes from months or years of feedback from continuous use.</p>
<p>Alternatively, a faster and more scalable approach is to modify existing cars or trucks. This is how robots are being configured for COVID-19 applications. For example, responders began using the thermal cameras already on bomb squad robots and drones – common in most large cities – to detect infected citizens running a high fever. While the jury is still out on whether thermal imaging is effective, the point is that existing public safety robots were rapidly repurposed for public health.</p>
<p><strong>Don’t stockpile robots</strong></p>
<p>The broad use of robots for COVID-19 is a strong indication that the health care system needed more robots, just like it needed more of everyday items such as personal protective equipment and ventilators. But while storing caches of hospital supplies makes sense, storing a cache of specialized robots for use in a future emergency does not.</p>
<p>This was the strategy of the nuclear power industry, and it failed during the Fukushima Daiichi nuclear accident. The robots stored by the Japanese Atomic Energy Agency for an emergency were outdated, and the operators were rusty or no longer employed. Instead, the Tokyo Electric Power Company lost valuable time acquiring and deploying commercial off-the-shelf bomb squad robots, which were in routine use throughout the world. While the commercial robots were not perfect for dealing with a radiological emergency, they were good enough and cheap enough for dozens of robots to be used throughout the facility.</p>
<p><strong>Robots in future pandemics</strong></p>
<p>Hopefully, COVID-19 will accelerate the adoption of existing robots and their adaptation to new niches, but it might also lead to new robots. Laboratory and supply chain automation is emerging as an overlooked opportunity. Automating the slow COVID-19 test processing that relies on a small set of labs and specially trained workers would eliminate some of the delays currently being experienced in many parts of the U.S.</p>
<p>Automation is not particularly exciting, but just like the unglamorous disinfecting robots in use now, it is a valuable application. If government and industry have finally learned the lessons from previous disasters, more mundane robots will be ready to work side by side with the health care workers on the front lines when the next pandemic arrives.</p>
<p>Robin R. Murphy<br />
<em>Raytheon Professor of Computer Science and Engineering; Vice-President Center for Robot-Assisted Search and Rescue (nfp), Texas A&amp;M University</em></p>
<p>Justin Adams<br />
<em>President of the Center for Robot-Assisted Search and Rescue/Research Fellow - The Center for Disaster Risk Policy, Florida State University</em></p>
<p>Vignesh Babu Manjunath Gandudi<br />
<em>Graduate Teaching Assistant, Texas A&amp;M University</em></p>
<p>This article is republished from The Conversation under a Creative Commons license. <a href="https://theconversation.com/robots-are-playing-many-roles-in-the-coronavirus-crisis-and-offering-lessons-for-future-disasters-135527" target="_blank" rel="noopener">Read the original article.</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Blue Prism&#039;s Digital Workforce Now Available to U.S. Army Through CHESS Program</title>
		<link>https://intelligentautomationtrends.com/blue-prisms-digital-workforce-now-available-to-u-s-army-through-chess-program/</link>
		
		<dc:creator><![CDATA[Webmaster]]></dc:creator>
		<pubDate>Wed, 20 May 2020 00:00:00 +0000</pubDate>
				<category><![CDATA[Tech & AI]]></category>
		<guid isPermaLink="false">https://intelligentautomationtrends.com/blue-prisms-digital-workforce-now-available-to-u-s-army-through-chess-program/</guid>

					<description><![CDATA[Broadening accessibility to its Digital Workforce, Blue Prism (AIM: PRSM), a global leader in Robotic Process Automation (RPA), today announced that it has partnered with the ImmixGroup to deliver intelligent automation solutions to the U.S. Army via its CHESS (Computer Hardware Enterprise Software and Solutions) Program. Blue Prism's software has previously been available to Federal, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Broadening accessibility to its Digital Workforce, Blue Prism (AIM: PRSM), a global leader in Robotic Process Automation (RPA), today announced that it has partnered with the ImmixGroup to deliver intelligent automation solutions to the U.S. Army via its CHESS (Computer Hardware Enterprise Software and Solutions) Program. Blue Prism's software has previously been available to Federal, state and local government agencies through the ImmixGroup General Service Administration (GSA) Schedule.</p>
<p>As the demand for RPA software within the U.S. Army increases, Blue Prism is working with ImmixGroup to simplify its procurement. As the main provider of commercial enterprise information technology (IT) solutions for the U.S. Army, CHESS allows authorized commissioners to procure a wide array of IT services and solutions, including RPA. Blue Prism is now a CHESS-approved software provider, making it easy for end-users to purchase Blue Prism's Digital Workforce.</p>
<p>"Blue Prism is excited to partner with ImmixGroup to support the U.S. Army through this strategic contract initiative," says Mike Pullman, Regional VP of Alliances &amp; Channels, Public Sector for Blue Prism. "By making Blue Prism's Digital Workforce available on the ITES Software vehicle, this team can provide the transformative intelligent automation solutions the U.S. Army requires."</p>
<p>Blue Prism's connected-RPA offering provides government agencies with an intelligent Digital Workforce (software robots) capable of self-learning and continuous improvement, with access to a rich array of AI and cognitive capabilities through a drag-and-drop interface. By pairing a Digital Workforce with a nimble, up-skilled Federal workforce, the U.S. Army and other government agencies can keep their missions cost-effective, streamlined and sustainable through task automation that works within existing governance and security policies.</p>
<p>Government agencies can use Blue Prism to help deliver more output with fewer resources, while freeing up human employees' time from repetitive tasks to focus on higher-value cognitive work. This enables a more citizen-centric approach by increasing the overall quality of the services provided to citizens, coupled with improved consistency and faster delivery. It also gives agencies a game-changing way of staying viable by easily accessing and exploiting leading-edge cloud, AI and cognitive capabilities.</p>
<p>Through partnerships with the world's foremost cognitive computing and AI technology companies, Blue Prism is rapidly evolving the capabilities and intelligence of its Digital Workforce so they can apply their multiple skills to any functional area of an organization. As an off-the-shelf solution, government agencies can integrate Blue Prism into their processes, leveraging the intelligent automation platform of their choice. Blue Prism is secure and compliant supporting industry-leading standards such as CERT Secure Coding as well as achieving Veracode Verified Continuous Accreditation. Leading organizations also trust Blue Prism to support their compliance with PCI and HIPAA processes.</p>
<p><strong>About Blue Prism</strong></p>
<p>Blue Prism's vision is to provide a Digital Workforce for Every Enterprise. The company's purpose is to unleash the collaborative potential of humans, operating in harmony with a Digital Workforce, so every enterprise can exceed its business goals and drive meaningful growth, with unmatched speed and agility.</p>
<p>Fortune 500 and public-sector organizations, among customers across 70 commercial sectors, trust Blue Prism's enterprise-grade connected-RPA platform, which has users in more than 170 countries. By strategically applying intelligent automation, these organizations are creating new opportunities and services, while unlocking massive efficiencies that return millions of hours of work back into their business.</p>
<p>Available on-premises, in the cloud, hybrid, or as an integrated SaaS solution, Blue Prism's Digital Workforce automates ever more complex, end-to-end processes that drive a true digital transformation, collaboratively, at scale and across the entire enterprise.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
