Award-winning legal video producer Andrew Colton has publicly rejected a law firm's request to use artificial intelligence to make injuries appear more severe in legal documentation videos. Colton, who produces personal injury "day in the life" videos and settlement documentaries for over 200 attorneys nationwide, stated that AI has no place in legal video production when it compromises credibility.
The incident occurred when a law firm asked Colton to use AI to make injuries appear more severe in documentation footage. Colton refused, even at the risk of negative feedback on attorney communication platforms. "There's nothing more important than credibility," Colton emphasized. "Even if it leads to an unhappy law firm."
Colton's position highlights growing ethical concerns about artificial intelligence applications in legal proceedings. His company, Colton Legal Media, produces videos documenting injuries for settlement negotiations and courtroom presentations in cases involving personal injury, traumatic brain injury, wrongful death, truck accidents, and medical malpractice. These productions are designed to credibly document injuries to facilitate appropriate settlements or judgments.
The producer distinguishes between communication professionals like himself and technically trained legal videographers who might be more willing to employ AI manipulation. Colton warns that hiring CLVS (Certified Legal Video Specialist) videographers for sensitive personal injury documentation can be a poor fit, as these professionals typically specialize in deposition recording rather than the intimate, narrative documentation injury cases require.
Colton's stance reflects a broader conversation about ethical boundaries in legal technology adoption. As artificial intelligence becomes more sophisticated and accessible, legal professionals face growing pressure to adopt these tools for competitive advantage. Colton argues, however, that credibility must remain paramount in legal documentation, particularly when capturing sensitive personal moments such as medical procedures or the aftermath of an injury.
The implications extend beyond individual cases to the legal industry's relationship with emerging technologies. Colton's public refusal serves as a cautionary example for attorneys considering AI applications in evidence presentation. It raises questions about professional standards, evidentiary integrity, and the ethical responsibilities of legal service providers in an increasingly digital landscape.
For business and technology leaders, this incident illustrates the complex intersection of innovation and ethics in professional services. While AI offers numerous efficiency benefits across industries, its application in legal evidence documentation requires careful consideration of ethical boundaries and professional standards. The legal industry's approach to these challenges may establish precedents for other sectors grappling with similar ethical dilemmas in technology adoption.