SWGDE

SWGDE Overview: Artificial Intelligence Trends in Video Analysis

20-v-001

Disclaimer and Conditions Regarding Use of SWGDE Documents:

SWGDE documents are developed by a consensus process that involves the best efforts of relevant subject matter experts, organizations, and input from other stakeholders to publish suggested best practices, practical guidance, technical positions, and educational information in the discipline of digital and multi-media forensics and related fields. No warranty or other representation as to SWGDE work product is made or intended.

As a condition to the use of this document (and the information contained herein) in any judicial, administrative, legislative, or other adjudicatory proceeding in the United States or elsewhere, the SWGDE requests notification by e-mail before or contemporaneous to the introduction of this document, or any portion thereof, as a marked exhibit offered for or moved into evidence in such proceeding. The notification should include: 1) the formal name of the proceeding, including docket number or similar identifier; 2) the name and location of the body conducting the hearing or proceeding; and 3) the name, mailing address (if available), and contact information of the party offering or moving the document into evidence. Subsequent to the use of this document in the proceeding, please notify SWGDE as to the outcome of the matter. Notifications should be sent to secretary@swgde.org.

From time to time, SWGDE documents may be revised, updated, or sunsetted. Readers are advised to verify on the SWGDE website (www.swgde.org) that they are utilizing the current version of this document. Prior versions of SWGDE documents are archived and available on the SWGDE website.

Redistribution Policy:

SWGDE grants permission for redistribution and use of all publicly posted documents created by SWGDE, provided that the following conditions are met:

  1. Redistribution of documents or parts of documents must retain this SWGDE cover page containing the Disclaimer and Conditions of Use.
  2. Neither the name of SWGDE nor the names of contributors may be used to endorse or promote products derived from its documents.
  3. Any reference or quote from a SWGDE document must include the version number (or creation date) of the document and also indicate if the document is in a draft status.

Requests for Modification:

SWGDE encourages stakeholder participation in the preparation of documents. Suggestions for modifications are welcome and must be forwarded to the Secretary in writing at secretary@swgde.org. The following information is required as a part of any suggested modification:

  1. Submitter’s name
  2. Affiliation (agency/organization)
  3. Address
  4. Telephone number and email address
  5. SWGDE Document title and version number
  6. Change from (note document section number)
  7. Change to (provide suggested text where appropriate; comments not including suggested text will not be considered)
  8. Basis for suggested modification

Intellectual Property:

Unauthorized use of the SWGDE logo or documents without written permission from SWGDE is a violation of our intellectual property rights.

Individuals may not misstate and/or overrepresent the duties and responsibilities of SWGDE work. This includes claiming oneself as a contributing member without actively participating in SWGDE meetings; claiming oneself as an officer of SWGDE without serving as such; claiming sole authorship of a document; or using the SWGDE logo on any material and/or curriculum vitae.

Any mention of specific products within SWGDE documents is for informational purposes only; it does not imply a recommendation or endorsement by SWGDE.

Table of Figures

Figure 1. Example of a deepfake video produced using deep learning techniques.

Figure 2. Computer-generated faces using Generative Adversarial Networks trained on the CelebA dataset of celebrity images to create high-resolution face and head imagery of non-existent humans [25].

1. Purpose

The purpose of this document is to provide a brief informational snapshot of some currently available and emerging artificial intelligence (AI) technologies relating to video content analysis that may be of interest to investigators and forensic analysts involved in the collection, acquisition, review and processing of digital multimedia evidence.

2. Scope

The document is intended for use by investigators who already have an understanding of digital evidence principles and are seeking greater knowledge about this particular topic. It outlines a number of possible beneficial uses of these advanced technologies, while highlighting some technical and legal challenges related to their potential for use in the manipulation of digital multimedia files. It also includes cautionary information and suggestions for the responsible, legitimate use of artificial intelligence in criminal investigations. For the purposes of this document, personnel utilizing this technology will be referred to as “investigators.”

3. Limitations

This document is not intended to provide legal advice or replace an organization’s Standard Operating Procedures (SOP). Any operational guidance derived from this document should be implemented only after consultation with legal personnel versed in the laws and rules applicable to the investigator’s particular jurisdiction. Additionally, this document is not all-inclusive, is not intended to state a position for or against the use of artificial intelligence in the acquisition or processing of digital multimedia evidence, and does not provide instructions or suggestions for use of the described technologies. It is offered solely as an informational overview of the technology to increase awareness of its potential impact on the digital evidence community. Mentions of any software providers or applications are intended only to illustrate some of the options available at the time of this writing. They do not represent endorsements or recommendations of any specific products.

4. Terminology

Artificial Intelligence (AI) – An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action [1].

Artificial Neural Networks (ANNs) – Biologically inspired computer programs designed to simulate the way in which the human brain processes information. ANNs gather their knowledge by detecting the patterns and relationships in data and learn (or are trained) through experience, not from programming [2].

Computer Vision – The use of computer programs to extract symbolic information at the level of scene content from image or video data to emulate the capabilities of the human visual system.

Deepfake – A multimedia file that has been manipulated to include synthetic media, using machine learning and/or artificial intelligence algorithms. These manipulations may be more difficult to detect than those performed using traditional multimedia editing tools.

Deep Learning – Advanced machine learning, based on artificial neural networks with many layers, used for the classification of visual images, audio, or text; it typically involves training with very large amounts of data.

Deep Neural Network – An artificial neural network comprising more than two layers.

Digital Fingerprint – A common term for the calculated hash value of a digital file that may be used for identification, authentication, and/or integrity verification.

Faceswap – Deep learning models of two faces (A and B) are used to generate the face of one person (B) with the pose and expression of the other (A). The typical machine learning technique is an Autoencoder.

Generative Adversarial Network (GAN) – A class of machine learning systems consisting of two neural networks (NN) that compete with each other in a game. One NN (the discriminator) is trained on fixed training data to recognize its statistical patterns, while the other NN (the generator) is trained to output new data points that match the pattern statistics of the training set closely enough that the discriminator misrecognizes them as real.

Machine Learning (ML) – A subset of AI in which computer programs and algorithms can be designed to “learn” how to complete a specified task, with increasing efficiency and effectiveness as it develops. Such programs can use past performance data to predict and improve future performance [3].

Video Analytics (VA) – Applications of computer vision that leverage information and knowledge from video data content to address a particular applied information processing need. [4]

Voice Cloning (also referred to as “Deep Voice”) – A technique for using AI to replicate a human voice. It is sometimes used in combination with deepfake face-swapping technology to produce a contrived multimedia presentation.

5. General Information

Artificial Intelligence (AI) is not a new concept. Scientists have been studying and conducting AI research in many forms since at least the 1940s [5]. A recent report to Congress prepared by the Congressional Research Service noted that “the field of AI research began in 1956, but an explosion of interest in AI began around 2010 due to the convergence of three enabling developments: (1) the availability of “big data” sources, (2) improvements to machine learning approaches, and (3) increases in computer processing power. This growth has advanced the state of Narrow AI, which refers to algorithms that address specific problem sets like game playing, image recognition, and navigation. All current AI systems fall into the Narrow AI category. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. During the training process, the computer system creates its own statistical model to accomplish the specified task in situations it has not previously encountered. Experts generally agree that it will be many decades before the field advances to develop General AI, which refers to systems capable of human-level intelligence across a broad range of tasks. Nevertheless, the growing power of Narrow AI algorithms has sparked a wave of commercial interest, with U.S. technology companies investing an estimated $20-$30 billion in 2016. Some studies estimate this amount will grow to as high as $126 billion by 2025” [6].

AI is now in use across many different industries, including, but not limited to, agriculture, aviation, education, computer science, finance, healthcare, security, telecommunications, manufacturing, transportation, and public safety. This has resulted in capability advancements in areas such as computer vision, voice recognition, speech language translation, smart robotics, medical diagnosis, autonomous vehicles, and virtual personal assistants, many of which are based upon deep learning processes. In an effort to harness this resurgent technology and encourage its use to address some of the world’s greatest social and economic problems, the United Nations helped establish a foundation in 2015 called “AI for Good,” which continues to host a number of crowdsourcing projects in furtherance of these critical objectives [7].

6. Computer Vision Applications

This renewed interest and investment in ML technologies, especially in the area of advanced deep learning capabilities, has resulted in the creation of many important new and improved tools that may be used beneficially in the production, acquisition, and processing of digital multimedia evidence. Many powerful new computer vision / video analytics capabilities have become available due to recent advances in deep learning processes. As described in a 2017 report published by the U.S. Department of Commerce and the National Institute of Standards and Technology (NIST), video analytics “is a quickly emerging application area focused on automating the laborious tasks of monitoring live streams of video, streamlining video communications and storage, providing timely alerts, and making the task of searching enormous archives of video tractable” [8].

Examples of Computer Vision applications include, but are not limited to the following:

6.1. Real-time Video Analytics

Security and surveillance systems have for many years included basic video analytics in the form of motion detection to activate recordings; however, they are now becoming available with advanced computer vision capabilities that allow for automated identification of specific objects or activities. The objects for detection can include faces, numbers, animals, vehicles, weapons, or any other aggregation of pixels that forms a specified shape or color, while activities of interest may include aggressive actions or possible criminal activity (e.g., a punch, a kick, a gun display, a suspicious package left behind, or an item removed from a display and placed in a pocket or purse).
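
To illustrate the foundational detection step, the following sketch uses OpenCV background subtraction to flag moving regions in a recording. It is a minimal illustration only; the file name and area threshold are hypothetical, and commercial analytics systems use trained deep learning detectors rather than this simple technique.

```python
# Illustrative sketch only: rudimentary motion-based analytics with OpenCV.
import cv2

cap = cv2.VideoCapture("camera.mp4")          # placeholder; could be a live camera index
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels that differ from the learned background model become foreground.
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:    # ignore small noise blobs (arbitrary threshold)
            x, y, w, h = cv2.boundingRect(contour)
            # A deployed system would classify the region (face, vehicle, weapon)
            # and raise an alert; here we simply log the detection.
            print(f"Motion region at ({x},{y}) size {w}x{h}")

cap.release()
```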

In some systems, the intelligence is embedded directly within the camera firmware. This is referred to as “edge analytics” and allows for intelligent filtering of data at the sensor itself, reducing processing requirements at the central repository and minimizing bandwidth and storage demands.

Use of these advanced systems can make it possible for many cameras to be monitored by only a few humans, increasing operational efficiency as a force multiplier. Viewers and others may be alerted to predefined activities or anomalies in expected behaviors or patterns of movement.

If properly configured, video analytics can automatically and accurately detect potential threats or activities of interest captured by the cameras. They are also not subject to the various issues commonly experienced by humans assigned to perform these mundane tasks, such as inattention to the screens and unavoidable fatigue [9].

These same capabilities are also emerging in some in-car video recording systems and body worn cameras being used by law enforcement agencies, facilitating the automated extraction of metadata from incoming video recordings.

6.2. Expedited Review of Video Recordings

Investigators are often faced with hours or even weeks of recordings that must be carefully examined to identify persons or objects of interest. This is especially true when a major event has occurred and multimedia evidence has been acquired through canvassing or crowdsourcing efforts. Investigators responsible for this review may now utilize sophisticated computer vision applications to expedite the process and accomplish the task in much less time, often with a higher level of accuracy. For example, investigators in the Boston Marathon bombing case utilized such software applications, which provided rapid video synopsis, specific object recognition, area-of-interest motion detection, and cross-file comparison capabilities. Recordings typically must be preprocessed before review; original recordings, often stored in proprietary formats, may need to be transcoded before ingestion and processing. [10][11][12][13][14][15]
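
As an illustration of the preprocessing step, the sketch below (assuming the ffmpeg utility is installed) creates a playable working copy of a proprietary-format recording while hashing the untouched original; the file names and encoder settings are hypothetical.

```python
# Minimal sketch: transcode a working copy for review; never modify the original.
import hashlib
import subprocess

src, dst = "original.dav", "working_copy.mp4"   # placeholder file names

# Hash the original first so the working copy can be tied back to it.
sha256 = hashlib.sha256(open(src, "rb").read()).hexdigest()
print(f"Original SHA-256: {sha256}")

subprocess.run(
    ["ffmpeg", "-i", src,
     "-c:v", "libx264", "-crf", "18",   # visually near-lossless H.264
     "-pix_fmt", "yuv420p",             # broad player compatibility
     dst],
    check=True,
)
```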

6.3. License Plate Recognition (LPR)

Law enforcement agencies across the country now utilize fixed and portable LPR systems on a daily basis to instantly read license plates and compare the results against various “hotlist” databases, identifying vehicles of interest to law enforcement. They can provide investigators with both real-time alerts (e.g., stolen vehicles, Amber Alerts, sex offender registries, wanted subjects, etc.) and historical information relevant to criminal investigations (e.g., the location of a particular vehicle at a given date and time).

LPR systems are also used by a number of private sector entities, including vehicle repossessors, parking garage operators, corporate security officials, gated residential communities, and transportation authorities (e.g., tollways and permitted HOV lanes). While law enforcement is not generally involved in the operation of these systems, the data collected may be made available to law enforcement as needed for criminal investigations.

6.4. Facial Recognition

Facial recognition has become a powerful tool for criminal investigators, who can now much more easily identify unknown suspects from still images, often captured as screenshots from security video recordings collected as evidence. Using AI, the images are compared against databases of known subjects with identifying metadata.
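
The comparison stage can be illustrated with a minimal sketch: candidate matches are ranked by the similarity of numerical face “embeddings.” The embed() function below is a hypothetical stand-in for a real deep learning model, with random vectors simulating its output.

```python
# Hedged sketch of the comparison step: rank gallery candidates by cosine
# similarity between face embeddings. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def embed(image_id: str) -> np.ndarray:
    """Placeholder for a deep-learning face-embedding function."""
    return rng.standard_normal(128)

gallery = {f"subject_{i:03d}": embed(f"subject_{i:03d}") for i in range(1000)}
probe = embed("unknown_suspect_frame")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Return a ranked candidate list; a trained examiner must review every match.
ranked = sorted(gallery.items(), key=lambda kv: cosine(probe, kv[1]), reverse=True)
for name, vec in ranked[:5]:
    print(name, round(cosine(probe, vec), 3))
```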

In the past, suspects whose faces were clearly captured by security cameras could only be identified through individual recognition by law enforcement personnel or, in major cases, through public requests for assistance. Investigators may now solve many of these cases, even those considered to have gone “cold,” using these advanced AI applications.

While law enforcement agencies commonly rely upon known-offender databases for comparison with images collected during criminal investigations, significant controversy has been generated about the possibility of comparison with publicly available images (e.g., driver’s license photos, social media images, etc.) [16].

There are known examples of bias in some facial recognition software applications, specifically related to the types of datasets historically used to train the software (e.g., more men than women, or samples skewed toward a particular race or ethnicity) [17]. As such, any results or identifications should be confirmed by a properly trained examiner before action is taken based upon software-generated results [18].

6.5. Automated Redaction of Video Recordings

Software providers are implementing applications that allow for the automated redaction of imagery in video files based on the identification of specified objects (e.g., human skin, individual faces, vehicle registration numbers, etc.). This can significantly reduce the time and effort required to complete redactions. These software tools use facial and object recognition capabilities to track subjects throughout police videos (e.g., body worn cameras, in-car video recording systems, interview rooms, and surveillance video) to increase efficiency for those tasked with disclosing public records. This is becoming increasingly important due to the growing number of cameras used in law enforcement to record daily activities and the proliferation of video collected from other sources in investigations.
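
A minimal sketch of the face-blurring step, using OpenCV's bundled Haar cascade face detector, is shown below. The file names are hypothetical, and production redaction tools add object tracking so that a face remains redacted even in frames where detection momentarily fails.

```python
# Illustrative sketch: blur detected faces frame by frame with OpenCV.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("bodycam.mp4")                    # placeholder input
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("bodycam_redacted.mp4", fourcc, fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy Gaussian blur.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    out.write(frame)

cap.release()
out.release()
```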

6.6. Automated Removal of Illicit Images and Video from the Internet

For several years, organizations have assisted law enforcement in their efforts to mitigate the widespread proliferation of child pornography. Some tools rely upon cryptographic hashing techniques to create digital signatures of images for comparison with files that have already been classified by law enforcement as illicit [16]. The International Centre for Missing and Exploited Children (ICMEC) provides technology tools and training for law enforcement officers and has partnered with companies to help automatically remove this type of content from the internet [19][20][21][22].
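
The hash-matching approach described above can be sketched in a few lines: a SHA-256 “digital fingerprint” is computed for a file and compared against a database of values for previously classified material. The file name and database entry below are hypothetical placeholders.

```python
# Minimal sketch: a cryptographic hash matches only bit-identical files, which
# is why this technique is paired with databases of previously classified items.
import hashlib

def digital_fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()

# Placeholder database of known hash values.
known_illicit = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

if digital_fingerprint("seized_image.jpg") in known_illicit:   # hypothetical file
    print("File matches a previously classified item.")
```

Note that any change to a file, including re-encoding, alters its cryptographic hash, which is one reason newer tools supplement this approach with AI-based content classification.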

6.7. Multimedia Content Analysis and Automated Metadata Processing

AI technology advancements have made possible the automatic analysis, extraction, and generation of metadata fields based upon multimedia file content. As an example, cloud-hosted services have been created that collectively leverage a number of applications to convert files into multiple formats, recognize and log objects within a video, transcribe speech from the audio tracks, and identify words from the unstructured data using natural language processing. This type of automatic extraction of content data can reduce the time required for investigators to review large quantities of digital multimedia evidence. It may also allow an agency to build a searchable database of information from routinely ingested video files, such as those produced by in-car, body worn, security, and traffic management cameras [23].
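
As a minimal sketch of automated metadata extraction (assuming the ffprobe utility is installed), the following retrieves container and stream information as structured JSON, the kind of field extraction the cloud-hosted services described above perform at much larger scale; the file name is a placeholder.

```python
# Hedged sketch: extract container and stream metadata as JSON with ffprobe.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "evidence.mp4"],   # placeholder file name
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

print("Container:", info["format"].get("format_name"))
print("Duration (s):", info["format"].get("duration"))
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))
```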

7. Challenges

As previously discussed, while the recent proliferation of AI technologies has provided law enforcement with many valuable new tools, it has at the same time presented some new challenges, as noted below, which must be appropriately addressed.

7.1. Deepfakes

Developments in synthetic media generation introduce challenges for criminal investigators and prosecutors, who must be able to verify the authenticity of multimedia recordings that are to be accepted and admitted as evidence. Deep learning can be used for the artificial synthesis of video and audio data, allowing a user to train neural networks on the facial features of two people in order to swap the facial pose and expression of one person with that of another. When combined with cloned speech or the voice of an impersonator, this manipulation can effectively make it appear as if someone said something they did not. These techniques have already been used to create many “deepfake” videos of well-known personalities on the internet, including non-consensual pornography, primarily of celebrity actresses. Celebrities and politicians are at particular risk of being victimized because they have many videos and images readily available on the internet [24]. An attack of this nature could be used for political influence, incitement of unrest, or defamation. The threat becomes more serious when combined with voice cloning, also referred to as “deep voice,” where the voice features of an individual are modeled in a manner similar to deepfakes, allowing for the synthesis of speech that sounds as if it originates from the individual portrayed in the deepfake video.
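
The face-swapping technique described above is commonly built on a shared-encoder, dual-decoder autoencoder. The following PyTorch sketch illustrates the concept only, with random tensors standing in for aligned face crops of the two individuals; it is not a description of any specific deepfake tool.

```python
# Conceptual sketch of the shared-encoder / dual-decoder faceswap autoencoder.
# Assumes 64x64 RGB face crops; random tensors stand in for real datasets.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)

faces_a = torch.rand(8, 3, 64, 64)  # placeholder for aligned crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder for aligned crops of person B

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's frame, then decode with person B's decoder,
# yielding B's identity with A's pose and expression.
swapped = decoder_b(encoder(faces_a[:1]))
```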

7.1.1. Generative Adversarial Network (GAN) Imagery

GAN imagery is the product of a computer model trained on a set of example images in order to generate new, realistic representations of similar content. This technique has great potential for the creative industries, including video gaming and movie production, while also providing an additional layer of believability to falsified media. Current technology allows for the creation of convincing human face and head still images (see Figure 2); future efforts could extend to other types of objects. GAN technology has the potential for widespread use in fraud and identity obfuscation.
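
A minimal sketch of the adversarial training loop can make the GAN concept concrete. The toy generator and discriminator below operate on a simple two-dimensional distribution rather than imagery; real face-generation GANs use far larger convolutional networks and training sets.

```python
# Toy GAN sketch: a generator learns to mimic a fixed "training set" while a
# discriminator learns to tell generated samples from real ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

real_data = torch.randn(256, 2) * 0.5 + 2.0   # stand-in fixed training set

for step in range(200):
    # --- Discriminator: label real samples 1, generated samples 0 ---
    fake = G(torch.randn(64, 16)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator: fool the discriminator into labeling fakes as real ---
    g_loss = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```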

7.1.2. Governmental Response

The federal government has identified these deepfake trends as potentially serious threats to national security and is taking steps to address these concerns. As an example, the Media Forensics (MediFor) program, funded by the Defense Advanced Research Projects Agency (DARPA), was a response to the observation that relatively unskilled users can manipulate visual media for adversarial purposes, and that such manipulation is difficult to detect both visually and with current image analysis and visual media forensics tools. The program’s goal was to develop technologies that automatically assess the integrity of an image or video file, to produce an integrated platform for end-to-end media forensics evaluation, and to provide details that help facilitate decisions regarding the image or video in question [26][27][28].

7.2. Machine Learning Data Integrity

A number of efforts are underway to protect and verify the integrity of datasets used in critical machine learning processes, such as those related to autonomous vehicles, smart robots, and medical diagnosis. If the information used to “train” these AI systems is maliciously tampered with, the consequences could be significant, or even catastrophic. For example, if the computer vision system used in an autonomous vehicle for recognition of traffic control devices fails to properly detect a stop sign or red light, the vehicle could improperly proceed through a controlled intersection and become involved in a collision. As an example of the work being done to address this, the Intelligence Advanced Research Projects Activity (IARPA) has a program called Trojans in Artificial Intelligence (TrojAI) intended to introduce automated alerts that are triggered by attempts to tamper with critical AI systems [29].
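
Label flipping, one simple form of training data tampering, can be demonstrated with a toy experiment: the sketch below (not a description of the TrojAI program) trains the same classifier on clean and on partially corrupted labels and compares accuracy. The dataset and flip rate are arbitrary illustrations.

```python
# Toy demonstration: flipping a fraction of training labels degrades a model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

poisoned_y = y_tr.copy()
rng = np.random.default_rng(0)
flip = rng.choice(len(poisoned_y), size=len(poisoned_y) // 5, replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]       # flip 20% of the training labels
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```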

7.3. Public Concerns about Privacy Intrusion and Government Misuse

There are growing public concerns about the potential for misuse of these technologies by government entities, resulting in privacy intrusion, disparate treatment, or suspect misidentification. As an example response to this perception, the City of San Francisco became the first major American city to ban the use of facial recognition technology by all city departments, as part of a comprehensive ordinance governing the acquisition and use of surveillance technologies [30][31]. Washington State has also considered legislation (SB 5528) that would limit the procurement and use of facial recognition technology. Other city governments and state legislatures are now considering similar laws that seek to prohibit the use of facial recognition software in a discriminatory way or to collect data without first obtaining affirmative consent from all end users of the technology [32]. In a recent collective effort, a coalition of activist groups representing more than 15 million people established a website seeking a complete federal ban on the use of facial recognition technology by law enforcement [33].

While the use of facial recognition and other AI-driven technologies in government operations has been associated by some with privacy infringement, these tools can have significant public safety value when used appropriately by competent investigators. Policymakers should be cautioned not to indiscriminately prohibit the use of emerging technologies, such as AI, that can greatly assist government organizations. There may also be privacy concerns related to the subsets of data or images used to train machine learning algorithms; consideration should be given to where the data was obtained, if known, its integrity for comparison purposes, and the secure storage of any information provided by law enforcement.

7.4. Responsible Use of Technology and Collected Data

According to NIST, “Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a number of innovations including autonomous vehicles and connected Internet of Things devices in our homes… AI has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety and accuracy” [8].

7.4.1. Policies

Before a law enforcement or other government agency makes the decision to employ any new technology, including one that involves the use of AI, specific policies must be implemented. These policies should outline the intended benefits, regulations for responsible use by employees, and protection of any data collected. It should be emphasized that any results obtained through the use of ML-based data comparisons require human verification and/or corroboration with other available evidence before legal action is taken. Misidentification of a suspect or failure to detect crucial evidence due to the use of computer vision software could materially compromise an investigation. A transparent relationship with the public should be maintained, whenever practicable, to address the mistrust and fear of privacy intrusion that may accompany the use of these technologies. Utilization of these investigative tools absent a reasonable understanding of their limitations or adherence to safeguards may also contribute to a negative public perception or rejection of their validity in the digital evidence community.

7.4.2. Data Integrity and Protection

An understanding of how an organization’s data is used by the technology is also important. Prior to implementing a new Computer Vision application, an organization should know whether its data, or results derived from it, are distributed to other organizations or used to further develop the algorithm. Additionally, it should be known whether data provided by an organization, such as evidence images and video, is maintained or stored by the provider. This transparency with the manufacturer is vitally important, as efforts should be made to maintain the integrity and chain of custody of all evidence obtained.

7.4.3. Benchmarking and Validation

Any application designed for use with digital and multimedia evidence must be validated to protect the accuracy and integrity of evidentiary data. This may prove especially difficult for Computer Vision applications that adapt based on new information; moreover, the details of algorithms in proprietary tools are often unknown, requiring benchmarking on known reference datasets to verify their uses and limitations. Validation of tools should also include at least a limited statistical evaluation of false positives and false negatives. Versioning of the application and its algorithms should be maintained to ensure that results produced at one point in time can be repeated at a future date. It may be beneficial to have access to the database of images used to train the Computer Vision algorithm, as these sample sets have a strong effect on the results achieved. It is important to maintain a good relationship with the application provider to ensure successful validation can be achieved for both forensic and investigative purposes.
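
Such a statistical evaluation can be sketched simply: given hypothetical per-item ground-truth and tool-predicted labels from a reference dataset, the false positive and false negative rates are computed as follows.

```python
# Hedged sketch: scoring a computer-vision tool against a labeled reference
# dataset. The labels below are hypothetical (True = object of interest present).
ground_truth = [True, True, False, False, True, False, False, True]
predicted    = [True, False, False, True, True, False, False, True]

tp = sum(gt and p for gt, p in zip(ground_truth, predicted))
fp = sum((not gt) and p for gt, p in zip(ground_truth, predicted))
fn = sum(gt and (not p) for gt, p in zip(ground_truth, predicted))
tn = sum((not gt) and (not p) for gt, p in zip(ground_truth, predicted))

false_positive_rate = fp / (fp + tn)   # benign items incorrectly flagged
false_negative_rate = fn / (fn + tp)   # items of interest that were missed
print(f"FPR={false_positive_rate:.2f}  FNR={false_negative_rate:.2f}")
```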

8. Conclusion

Computer Vision capabilities have increased significantly during the past decade, as processing and storage capacities have increased. While new public safety benefits can be achieved as a result of these advances, the challenges faced will also increase. Investigators should have a basic knowledge of Machine Learning and its potential impact on the digital evidence community, or they could be left unaware of potential evidence and of issues arising from the use of these same technologies by those with malicious intent. Additionally, this technology may allow investigators to leverage available tools to enhance efficiency in their workflows. In utilizing Computer Vision applications, investigators should be cognizant of their limitations and remain diligent in their adherence to established standards. Responsible use, operational transparency, protection of data, and documented verification of results will also continue to be critical for successful adoption and public acceptance of AI technologies as public safety tools.

9. References

[1] Congressional Research Service (2019, January 30). “Artificial Intelligence and National Security.” [Online]. https://fas.org/sgp/crs/natsec/R45178.pdf

[2] National Center for Biotechnology Information (2000, June 22). “Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research” [Online]. https://www.ncbi.nlm.nih.gov/pubmed/10815714

[3] Richaldo Elias (2017, November 3). “Artificial Intelligence Terminologies” [Online]. https://medium.com/machine-learning-world/artificial-intelligence-terminologies-260f1d6d299f

[4] US Department of Homeland Security (2016, June 6). First Workshop on Video Analytics in Public Safety [Online]. https://www.dhs.gov/sites/default/files/publications/First-Workshop-on-Video-Analytics_508.pdf

[5] Dartmouth College (1955, August 31). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence [Online]. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

[6] Congressional Research Service, Artificial Intelligence and National Security, https://crsreports.congress.gov/product/pdf/R/R45178/5, page 2

[7] AI for Good Foundation (2019, June 4). “How can AI and machine learning be applied to solve some of society’s biggest challenges?” [Online]. https://ai4good.org/

[8] National Institute of Standards and Technology (NIST). (2018, September 17). Artificial intelligence [Online]. https://www.nist.gov/topics/artificial-intelligence.

[9] National Institute of Standards and Technology (NIST). (2019, November 6). Enhancing Public Safety Video Analytics with Computer Vision and Artificial Intelligence. [Online]. https://www.nist.gov/news-events/news/2019/11/enhancing-public-safety-video-analytics-computer-vision-and-artificial

[10] Griffeye (2019, June 6). “Analyze DI Pro” [Online]. https://www.griffeye.com/the-platform/analyze-di/

[11] Briefcam, Inc. (2019, June 6). “Accelerate Investigations” [Online]. https://www.briefcam.com/solutions/review-search/

[12] Vintra, Inc. (2019, June 6). “Know What the Cameras Know” [Online]. https://vintra.io/fulcrumai-investigator/

[13] GCN. (2013, April 18). How video analytics helps reconstruct Boston Marathon bombings [Online]. https://gcn.com/articles/2013/04/18/how-video-analytics-reconstruct-boston-marathon-bombings.aspx

[14] ABC News (2016, April 19). “Boston Bombing Day 2: The Improbable Story of How Authorities Found the Bombers in the Crowd” [Online]. https://abcnews.go.com/US/boston-bombing-day-improbable-story-authorities-found-bombers/story?id=38375726

[15] BBC (2019, March 4). “The New Weapon in the Fight Against Crime” [Online]. http://www.bbc.com/future/story/20190228-how-ai-is-helping-to-fight-crime

[16] American Civil Liberties Union (ACLU). (2020, May 27). ACLU V. Clearview AI. [Online]. https://www.aclu.org/cases/aclu-v-clearview-ai

[17] National Institute of Standards and Technology (NIST). (2020, September). Face Recognition Vendor Test (FRVT) Ongoing. [Online]. https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing

[18] Bureau of Justice Assistance, U.S. Department of Justice (2017, December). “Face Recognition Policy Development Template” [Online]. https://it.ojp.gov/GIST/1204/File/FINAL-Face%20Recognition%20Policy%20Development%20Template.pdf/

[19] NBC News (2018, October 24). Facebook touts use of artificial intelligence to fight child exploitation [Online]. https://www.nbcnews.com/tech/tech-news/facebook-touts-use-artificial-intelligence-fight-child-exploitation-n923906

[20] NetClean (2019, June 7). “Artificial Intelligence – The Future of Fighting Child Sexual Abuse Material” [Online]. https://www.netclean.com/technical-model-national-response/artificial-intelligence/

[21] International Centre for Missing and Exploited Children (ICMEC) (2019, June 7). “Giving law enforcement the tools it needs to fight child sexual exploitation” [Online]. https://www.icmec.org/train/law-enforcement/technology-tools/

[22] CNN Business (2013, June 17). “Google Seeks to Scrub Web of Child Porn” [Online]. https://www.cnn.com/2013/06/17/tech/web/google-child-porn/index.html

[23] Amazon Web Services (AWS) Media Analysis Solution (2019, September 8) [Online]. https://aws.amazon.com/solutions/media-analysis-solution/

[24] BBC News (2018, February 3). “Deepfakes porn has serious consequences” [Online]. https://www.bbc.com/news/technology-42912529

[25] Karras, Tero & Aila, Timo & Laine, Samuli & Lehtinen, Jaakko (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation. [Online]. https://www.researchgate.net/publication/320707565_Progressive_Growing_of_GANs_for_Improved_Quality_Stability_and_Variation

[26] CNN Business (2019, January 28). “When seeing is no longer believing: Inside the Pentagon’s race against deepfake videos” [Online]. https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/

[27] FCW (2018, July 16). “Rubio warns on ‘deep fakes’ in disinformation campaigns”. [Online]. https://fcw.com/articles/2018/07/16/deep-fakes-rubio-warner.aspx

[28] Defense Advanced Research Projects Agency (DARPA) (2019, June 6). “Media Forensics (MediFor)” [Online]. https://www.darpa.mil/program/media-forensics

[29] Intelligence Advanced Research Projects Activity (IARPA) (2019, July 16). “Trojans in Artificial Intelligence (TrojAI)” [Online]. https://www.iarpa.gov/index.php/research-programs/trojai

[30] The New York Times (2019, May 14). “San Francisco Bans Facial Recognition Technology” [Online]. https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html

[31] City and County of San Francisco Board of Supervisors (2019, May 6). Administrative Code – Acquisition of Surveillance Technology [Online]. https://sfgov.legistar.com/View.ashx?M=F&ID=7206781&GUID=38D37061-4D87-4A94-9AB3-CB113656159A

[32] Congress.gov (2019, March 14). S. 847 – Commercial Facial Recognition Privacy Act of 2019 [Online]. https://www.congress.gov/bill/116th-congress/senate-bill/847/all-info

[33] Fox News (2019, September 5). “Activists demand facial recognition ban for law enforcement in major new push” [Online]. https://www.foxnews.com/tech/activists-demand-facial-recognition-ban-law-enforcement

10. History

Revision    Issue Date   Section   History
1.0 DRAFT   09-17-2018   Video     Initial notes draft created.
1.0 DRAFT   09-18-2018   Video     Document edited during the SWGDE Minneapolis meeting.
1.0 DRAFT   06-06-2019   AI        Document edited during the SWGDE Denver meeting.
1.0 DRAFT   09-16-2019   Video     Document edited during the SWGDE Houston meeting.
1.0 DRAFT   09-17-2019   Video     Document edited during the SWGDE Houston meeting.
1.0 DRAFT   09-18-2019   Video     Document disseminated to SWGDE membership for comments during the Houston meeting.
1.0 DRAFT   06-03-2020   Video     Additional edits made during virtual SWGDE workshop.
1.0 DRAFT   09-14-2020   Video     Additional edits made during virtual SWGDE workshop.
1.0 DRAFT   09-17-2020   Video     Additional edits made during virtual SWGDE workshop; released for public comment.
1.0         01-14-2021   Video     Final document released for publication.
Version: 1.0 (January 14, 2021)