Francesca Gino's Best Case Against The Harvard Business School

by: John A. Byrne on August 11, 2025

Harvard Law Professor Lawrence Lessig provides a highly thorough debunking of the fraud allegations against Gino

THE DETAILED REBUTTAL OF THE ALLEGATIONS BY LAW SCHOOL PROFESSOR LAWRENCE LESSIG

The most complete explanation for the data irregularities in each of the four published studies under question is given by Law Professor Lessig in his exhibit to the lawsuit. His accounts are lengthy and highly detailed, if not in the weeds. Here's how he puts it, in his own words:

Allegation 4

Fifteen years ago, Gino and two co-authors conducted a study to measure whether a pledge of honesty would affect the honesty of the person making the pledge. There were three treatments. In all three, participants solved math problems, moved to a different room, and then reported the number of problems they solved correctly. In one treatment, participants signed a pledge of honesty before they reported. In the second treatment, they signed the pledge after they reported. In the third treatment, there was no pledge. The experiment found that people were more likely to be honest if they had signed a pledge before they reported the number they had solved correctly.

The data for the study was collected on paper. The reporting forms no longer exist. The Hearing Committee (HC) had two sets of data before them — call them File A and File B. It observed that (a) there were differences between File A and File B, (b) in all but one case, the differences strengthened the study's conclusions, and (c) it was implausible that anyone but Gino would have been working with File B in the hours between the time stamp on File A and the time results were written up using File B. From this, it concluded (d) that Gino must have falsified the data.
Everything in this argument hangs upon File A being the final representation of the data as transcribed from the paper surveys and "cleaned" by Gino's lab manager. If File A was simply an interim file, then there is no foundation for concluding that Gino changed anything.

• File A is clearly an interim file.

File A was one of three files with the same name produced by Gino's lab manager, Jennifer Fink. Gino has no email records from this period, and no emails were provided by Gino's co-authors. Nonetheless, the HC treated this file as if it represented a complete and accurate reporting of the survey data — meaning it reflected the data after it was transcribed from the paper surveys and cleaned by the lab manager. Yet no one — neither the lab manager nor Gino nor anyone else — testified that File A was the final file given to Gino for analysis. Indeed, in an email that transferred a predecessor to File A just days earlier, Fink plainly indicated the file needed more work. As she remarked:

The people are SERIOUS dumdums on this study. They seem to be having some serious issues, calculating the money, or if they got the amounts right, they were written and scribbled in very strange ways on the form.

These strong words plainly indicate more cleaning was to be done. When she sent File A three days later, that cleaning had not yet occurred. File A thus could not have reflected the necessary cleaning. Moreover, it is clear that File A does not even contain all of the entries for all of the participants. We know this because we know that the number of people paid for participating in the study is (1) different from the number of people whose data is reported in File A, and (2) consistent with the number of people whose data is reported in File B. And we know this because Gino, astonishingly, has receipts for the payments made to the participants in the study. Those receipts demonstrate that the number of participants was 101. File A reports data from just 98.
The receipts match the amount paid to and gender of the participants in File B. As to these receipts, the HC stated:

Professor Gino claims to have reviewed the original paper receipts completed by study participants and verified that the later data (on which her analysis relied) are accurate. She did not, however, provide those receipts or explain how they account for the analysis dataset.

This is a particularly glaring example of the committee's failure to understand its own record. Gino had provided the receipts. They are in the record. The HC may have overlooked them, but they establish that File A was not the final and cleaned file as given to Gino by her lab manager. Because File A is not the final representation of the data as collected from the paper surveys and "cleaned" by the RAs, the HC had no foundation to conclude, with "clear and convincing" evidence, that Gino had changed anything. There was thus no basis to find Gino committed research misconduct.

Allegation 4a

Allegation 4a is the most astonishing: The conclusion of the HC is that in this study, Gino initially described and conducted one experiment, and when that experiment didn't make sense, she changed it to describe a different experiment. The HC supports its conclusion by (a) presenting an early description, itself not written by Gino, (b) pointing to questions co-authors raised about that description, (c) evincing the change in the description that Gino then made, and (d) finding the change was intentional. From that, the HC concludes that (e) Gino committed research misconduct. This conclusion requires believing that the study was conducted as it was originally described.
Not only is there absolutely no witness support for this finding — no witness testified that the original description was correct; the only witness in the room testified it was not performed as originally described — but no one could seriously believe that Gino and her co-authors could intend to perform such an outrageously stupid experiment. The purpose of the experiment was to measure whether a pledge of honesty affected a subject's honesty. The only conceivable way to measure such an effect is for some of the participants first to be subjected to the treatment, and then report their results. If participants reported their results before they were subjected to the treatment — as the original description suggested — then there is obviously no way that the treatment could affect the results. Yet the committee's conclusion depends upon believing that these talented academics designed and conducted a study so flawed that even an untrained undergraduate would recognize the mistake in its design.

Gino offered the committee a competing — and obviously more plausible — account of the evidence: That initial description was simply wrong. And while it took more than one iteration with her co-author for her to recognize the error in the description, once she recognized it, she corrected it. Such changes among co-authors in the drafting stage of scholarship are as common as mud. The idea that 15 years later, a Hearing Committee could rely upon such changes to find career-ending research misconduct is chilling.

The HC also pointed to similar language in a draft of the IRB description, language apparently copy-and-pasted from the initial description. But IRB drafts change frequently, and there is no evidence this was the final IRB statement. Gino asked the UNC IRB office whether they still had the final description. The IRB office reported they did not.
To find research misconduct here, the HC must have clear and convincing evidence that the experiment was conducted as originally described. Not only is there no clear and convincing evidence it was, there is literally no evidence beyond the description. Nor is there any reason to suppose that these talented academics would have been so stupid in designing their study. Instead, the only plausible interpretation of the edit to this paper's description is that the original description was an error. Correcting a drafting error is not academic misconduct.

• How the investigative errors contributed to these substantive errors

The substantive errors of the Hearing Committee were precipitated by failures in the investigation. The committee prosecuted an alleged 15-year-old fraud, apparently without recognizing just how fragmentary the record would be. Not only did the committee not have the original data from the study, it did not have the full progression of data files between that original data and the cleaned data that Gino worked with. The file they treated as the final file was plainly not. The failure to collect the records of the others involved in the research meant that the committee only had a few interim versions of the data. This incomplete record provides no foundation for treating the interim File A as the final version allegedly modified by Gino.

Allegation 3

The (chronologically) second charge against Gino involves a paper measuring the effect of lying on creativity: Does lying make a participant more creative? The paper found a positive correlation between dishonesty and creativity. The HC concluded Gino had changed the data, finding "multiple types of changes that uniformly supported the study's hypothesis." This claim is flawed for a number of reasons.
First, and fundamentally: Contrary to the HC's claim, the "multiple types of changes" did not "uniformly support the study's hypothesis." The record demonstrated that 54% of the anomalies did not support the study's hypothesis. Some were wholly immaterial to the hypothesis, but included in the publication because the norm within Gino's discipline is to publish anything measured. Some were changes in variables that Gino did not even analyze. Thus, rather than cement a theory of motive, these facts press a question the HC simply ignored: Why would Gino introduce changes to the data that had no effect on the results?

Second, the data at issue was processed over at least 248 days. None have disputed that Gino's RAs and other assistants typically did such processing. And as the Qualtrics data for this study was not found in Gino's account, it must have been gathered and processed by an RA or a lab manager — not by Gino. Yet HBS interviewed no one except Gino and her co-author about this allegation. Nor did it gather evidence from anyone except Gino. This incomplete investigation thus led the IC to focus on just two files in a progression that had many more in the interim.

This failure to investigate is especially consequential with this allegation because, as determined by the HC, the core finding of guilt depended upon the nature of a virtual coin flip. HBS's new expert, Freese, claimed that the code for that coin flip must have been "rigged." From that assumption, he purports to deduce the anomalies that Gino must have created. Yet no one at any point in the 37-month long investigation before Freese arrived had ever suggested the coin flip was rigged. Data Colada had not so alleged, Maidstone did not so allege, and the IC did not so find. Had anyone raised this suggestion at the start of this investigation, it could easily have been resolved by talking to the computer programmer who wrote the code that performed the coin flip.
Yet HBS never spoke to him, never asked for his code, and never asked for copies of his email or other relevant records. Only after HBS had declared Gino's guilt was Gino able to approach him. At that point, he was understandably unwilling to be drawn into the fight. Gino testified the coin flip was not rigged. Ethical rules governing HBS labs require deception be declared to the IRB; no such deception was declared. The same rules also require participants be debriefed if deception was used; no participant in this study was debriefed. Yet the HC permitted Freese to insert this claim without any demand that HBS produce the best evidence of such rigging — the code from the coin flip.

It was based on Freese's rigged coin flip theory that the HC found "data for 12 participants … were changed…." Freese based his conclusion on differences between two columns of data for the participants. Because he assumed the columns were reporting the same data, he inferred the differences must have been fraudulent. But these two columns of data did not report the same data. That is why there were two columns. Not only were the two columns intended to report different data, but they were also titled differently — one titled "reported_guessed_correctly"; the other titled "cheat." There was thus no basis for concluding — certainly by "clear and convincing" evidence — that two differently titled columns were intended to gather the same data, and therefore, no basis for believing that differences between them are evidence of misconduct.

The HC concluded that "four overall scores were altered" "for one of the creativity tasks," and that "the text responses provided by seven participants were changed to make them appear more creative." Yet it was uncontested that Gino and co-authors worked back and forth with the RAs processing the data for almost a year. At each stage of this process of cleaning the data and analysis for this paper, different people worked with the data.
And as the earliest file in the record already includes coding, it is clear the committee didn't even have the original file reporting the initial data responses. Yet HBS interviewed none of the people who worked with these files, nor gathered from them any of the files necessary to complete the record. Having access to their records could have allowed Gino to show that the changes the HC discussed were in fact the results of coding errors, as Gino explained.

The HC also concluded that "[s]ix of those seven alterations were made by swapping the text of a more creative participant who had not cheated with the text of a less creative participant who had." (A652). But those changes weakened the t-statistics for the results. The HC offered no account of why Gino would intentionally weaken her own study by modifying its data.

In October 2013, Gino and her co-author received a revise-and-resubmit decision from the journal Psychological Science. However, missing from the HC's account is one critical fact: The journal asked the authors to code participants' responses for one of the tasks used to assess their creativity on two additional dimensions. So, yes, those data were changed, but only to respond to the journal's request. To respond to this request, Gino enlisted new RAs to re-code the data. During this process, an apparent data error produced a revised dataset with the new RAs' ratings imported from the wrong rows or tabs of the Excel file with the coding. But contrary to the HC's claims, this error reduced the statistical significance of the findings, again negating any suggestion that Gino had any motive to make this change.

Finally, the HC relies upon Gino's failure to respond immediately to an inquiry from one of the Data Colada authors as evidence that she intended to hide her manipulation of the data. When that data was ultimately shared, the HC suggested it was manipulated.
But Gino had testified that a file to be shared with other academics would have been prepared by her RAs. She testified that RAs routinely accessed her office and used her laptop to complete their work for her. Yet none of those RAs were interviewed, and the records of none were gathered by HBS. Gino asked HBS to provide door-access data to show who was in the office during the relevant period; that request was denied. She likewise requested phone use data to substantiate her report of RAs using her office. That request too was denied. In short, investigatory efforts to show innocent reasons for data anomalies were rejected by HBS. There are no errors at Harvard, only sinners.

When asked about the difference between the two files, the HC quoted Gino as saying:

If you want to make sure that everything is accurate in the datasets and the sums are not there, you might change it so that it's consistent.

The HC then used that statement to support this libel:

The actions in this instance were antithetical to science: instead of fixing the sums to reflect the actual data and contacting the journal to correct the scientific record, Professor Gino manufactured the data to support the sums. (A653).

The HC has mischaracterized what Gino said. It assumed Gino was describing her own behavior; in fact, she was describing how someone cleaning the files would have worked. Gino repeatedly testified that she did not clean or prepare data files. RAs did. The "you" in "if you want to make sure that everything is accurate" referred to the RAs, not Gino.

• How the procedural errors contributed to these substantive errors

The failure to investigate any of the participants in this 248-day process of gathering and cleaning the data files profoundly affected the substantive result.
This is especially true with the core claim by Freese that the virtual coin flip was "rigged." Had HBS simply asked the programmer for the code that generated that coin flip, we would know that Gino was right — that it was not rigged — and that Freese was wrong. Yet again, HBS denied Gino the meaningful opportunity to adduce evidence suggesting an innocent reason for an anomaly.

Allegation 2

Allegation 2 involves a study measuring the effects of being forced to engage in inauthentic behavior — specifically, writing an essay against your own views. The measured effect was whether inauthentic behavior made a person more likely to select cleaning products from a list of products provided to them after the intervention. The data supported the conclusion that it did.

The HC found 154 changes between the source data (call it "File A") and the data that was used to conduct the analysis (call it "File B"). It found that "[a]ll of the alterations are in the direction of the paper's hypotheses." It found Gino responsible for the changes.

First, and again, "all of the alterations are" not "in the direction of the paper's hypothesis." Specifically, 48% of the changes did not strengthen the paper's conclusions. Again, why would Gino make 80 changes that did not strengthen the paper's conclusions?

More fundamentally, the HC has missed the significance of the evidence Gino adduced to rebut the suggestion of misconduct. Properly understood, that evidence makes it practically impossible to believe that these anomalies were intentionally introduced by anyone. It is uncontested that File A was created by merging two separate files. Those two files are in the record. If those two files are merged and cleaned correctly, they produce File A. The IC asked Gino how those anomalies could have been generated with the ordinary cleaning process.
Working with an Excel expert, Gino discovered how: If the two predecessor files are merged and cleaned in a plausible way, then two features (or bugs, depending on your perspective) of Excel would produce File B precisely. These Excel features are (1) the way Excel swaps blocks of cells ("the swap feature"), and (2) the way it copies blocks of cells ("the copy feature"). Specifically, if a user selects a range of cells and, while pressing the shift key, drags it to a different position, the cells are swapped without warning. Likewise, if a user selects a range of cells and drags it while pressing the control key, the cells are copied to the new location, with data overridden without warning. Microsoft's user manual warns users about these features, and how they might produce unintended results ("Excel doesn't warn you if…"). Obviously, most users are unaware of these behaviors.

Gino demonstrated that if, after merging the two files, an RA had sorted them, reordered the columns in a logical way, and then performed one swap command and two copy commands (inadvertently or otherwise incorrectly), File B would have been produced exactly. She offered this scenario as a plausible way to account for File B, without assuming anyone engaged in any misconduct. Call this Possibility A. On this account, the anomalies in File B would have been the product of plausible errors in the data cleaning process — again, processes that would have been completed by RAs.

The HC rejected this possibility, finding instead that Gino had manipulated the values in File B to produce the anomalies observed. On their account, Gino flipped the values across a wide range of cells to produce the anomalies that are the predicate to this allegation. Call this Possibility B.
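The silent shift-drag swap described above is easy to simulate outside Excel. The sketch below is a minimal Python illustration — not the expert's actual reconstruction, and the row values and block size are hypothetical — assuming Excel behaves as the brief describes: a selected block dragged straight down with the shift key held exchanges places with the equal-sized block below it, with no warning.

```python
# Minimal sketch of the "swap feature" described above (illustrative only).
# Assumption, per the brief: shift-dragging a selected block straight down
# silently exchanges it with the block of equal height directly below it.

def shift_drag_swap(rows, start, height):
    """Return a copy of `rows` with rows[start:start+height] swapped
    with the next `height` rows -- no warning, just a silent exchange."""
    out = list(rows)
    out[start:start + height], out[start + height:start + 2 * height] = (
        rows[start + height:start + 2 * height],
        rows[start:start + height],
    )
    return out

# Sixteen data rows; the user "selects" rows 1-8 and drags them down by 8,
# so row 1 swaps with row 9, row 2 with row 10, and so on.
original = [f"row{i}" for i in range(1, 17)]
swapped = shift_drag_swap(original, 0, 8)

print(swapped[:3])    # ['row9', 'row10', 'row11']
print(swapped[8:11])  # ['row1', 'row2', 'row3']
```

A one-cell slip in the selection or the drop target silently rewrites every cell in both blocks, which is how a single gesture during cleaning could scatter dozens of anomalies through a dataset.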
Yet properly understood, Gino's discovery radically weakens the probability that anyone could have created the anomalies through an ad hoc modification of 80 cells out of the 2,455 cells that could have been changed to strengthen the result. Because the question the committee should have asked was this: "What is the chance that Gino could have intentionally changed precisely the same values that would have been changed by the plausible cleaning errors?" Or to put it differently: If indeed she had been tinkering with the data to strengthen the results, what is the chance that the set of values she changed would be the same as the changes produced by the cleaning error?

Consider just one step in the process that Gino described to see the point: The swap error. Gino demonstrated that if, while pressing the shift key, a user highlights a certain block of 8 rows and 9 columns in File A, and then drags that block straight down, the cells are swapped with the corresponding values in the next 8 rows. That means the values in row 1 are swapped with the values in row 9, row 2 with row 10, row 3 with row 11, and so forth. There is no warning about the swap. It just happens.

Gino showed that if you take File A and perform this step, you produce 144 of the anomalies, with only 80 relevant to the results. (The balance would be accounted for by the two other copy errors.) In the face of this demonstration, the committee should have asked: How likely is it that Gino would have selected the same cells in the same pattern to produce the same 80 — or more unlikely, 144 — anomalies? If Gino was simply randomly changing those values, she could have selected any 144 of the 2,455 potentially strengthening cells. What are the chances that she would have picked these 144, exactly? The answer is almost zero.
Or put more formally, given Possibility A (a plausible process that produces the 154 anomalies exactly), the chance of Possibility B (an intentional process manipulating the same 154 values) is effectively zero. No one could have randomly selected (among the 2,455 target cells) precisely the same cells to change as the 154 that were changed.

Of course, this argument rests on the assumption that File B was generated in the plausible way Gino described. Gino testified that the process she described matched the likely steps that RAs would have taken for merging the files and cleaning the resulting data. To be certain, the committee would have to know how the RAs cleaned the merged file. Here again, however, that evidence is not available because HBS took no steps to interview the RAs involved in generating File B. The RIO failed to gather or sequester any data from any computer other than Gino's, or from any Qualtrics account other than Gino's. Thus, the evidence necessary to know with certainty that the flawed procedure was, in fact, the procedure used is absent from this case because of HBS's incomplete investigation.

• How the investigative errors contributed to these substantive errors

Gino's defense rested upon her demonstration that the anomalies identified in this allegation could plausibly have been produced by the cleaning errors she and her expert had discovered, and the statistical certainty that they couldn't have been produced by random manipulation alone. While that discovery alone should resolve this allegation, the failure of HBS to interview any of the RAs who would have performed that cleaning destroyed Gino's opportunity to buttress this argument with their testimony. Once again, HBS failed to gather exonerating evidence, leaving incriminating evidence only.
Allegation 1

Allegation 1 — the only allegation within the 6-year limitation, and therefore, from my perspective, the only allegation you should be reviewing — focused on a paper testing the relationship between focus when networking and purity. The HC found 1,066 changed values in the data. It found Gino responsible for those changes. "All of the alterations," the HC stated, "are in the direction of the study's hypothesis, and without them, the data would not have supported that hypothesis."

Once again, this finding is flatly false. 39% of the anomalies were not used to test the hypothesis. Once again, why would Gino change 415 values for no purpose at all?

The HC identified the 1,066 changed values by comparing the output of a Qualtrics data set that recorded the participant responses to the dataset used in the final analysis. The committee speculated that:

[o]n a single day — January 24, 2020 — Professor Gino opened the original, unaltered Qualtrics dataset in the afternoon and then hours later saved the altered dataset used in the final analysis.

More specifically, the committee is claiming that Gino (a) opened the Qualtrics file, (b) cleaned and processed the raw data, and then (c) fraudulently manipulated the data to ultimately produce the dataset that ran the final analysis. Yet as Gino testified repeatedly, her practice was never to clean raw data from an experiment. That complex, error-prone work is done by RAs, and certainly would have been done by RAs in this case. No doubt, she could well have opened a Qualtrics file to check the number of participants in a study (a value needed when drafting a manuscript). But there is no evidence that she deviated from her ordinary practice in this case and spent the day doing the work her RAs were hired to do.
Nonetheless, having first conjured the image of Gino demoting herself to RA, the HC then imagined "she analyzed these two files over the course of the afternoon using SPSS, running commands in a manner consistent with repeatedly altering the data and then checking whether and how it improved the results." This finding is pure speculation, flatly refuted by the actual logs created by that SPSS session. When examined closely, those logs demonstrate she ran many commands during the session across four different datasets supporting three separate studies. They do not demonstrate that she ran a single command repeatedly on a single study. Any duplicate commands were caused by switching files, analyzing filtered subsets of the data, closing and reopening SPSS, and the efficiency of re-running a command rather than scrolling up a log to find it. The HC's assumption is thus flatly refuted by the objective records of the SPSS log file.

The HC rejected an alternative hypothesis — that any differences between the Qualtrics data and the data ultimately analyzed were produced in the ordinary process of cleaning the data — because it believed (1) that there was "no evidence that a research assistant was working with the data at this late stage," and (2) that no RA "had the knowledge, motive, or opportunity to alter over a thousand data points in the direction of the study's hypothesis."

Regarding (2), again, the anomalies are not all "in the direction of the study's hypothesis." 39% — 415 out of the 1,066 anomalies — did not strengthen the conclusion at all. But as to (1), the HC's finding is missing an obvious point: if an RA in fact did what Gino always had her RAs do — specifically, clean the data for her to then analyze — an RA would have done that cleaning before January 24. And indeed, HBS had direct testimony from one of Gino's RAs, Alex Rohe, that he cleaned the file from the Qualtrics data before January 24.
Rohe expressly asked HBS to check his email because he was unsure what happened to the file that he had created. HBS apparently neither checked his email nor sequestered any of his files — once again, failing to investigate potentially exonerating leads. This testimony thus directly conflicts with the assumption of the HC that on a single day, Gino downloaded the original data from Qualtrics, cleaned and processed it, and then analyzed it. Her RA testified he had cleaned it. That work would have been before January 24.

Gino also testified, as the HC writes, that there was on her computer "a version of the data file with 'R' in the name [that] matches the last initial of a research assistant," Alex Rohe. (The file does not survive.) But rather than credit that this file — and not the unprocessed Qualtrics data — could have been the source of the data analyzed on January 24, the HC rejected the suggestion because, as it wrote, there was "another 'R file' on her computer at a time when that individual was not working with her." Again, the HC does not know its own record. Gino had testified that at the time that file was created, another RA, Mindi Rock, was working for Gino. So if, indeed, Gino marked files in a way that indicated which RA had worked on them, the presence of a "version of the data file with 'R' in the name" was plainly more likely to be the source of the data analyzed on January 24 than data produced from the tedious task of cleaning raw Qualtrics output.

The HC's conclusions are further weakened by the rare fact that in this instance, both HBS's forensic expert and Gino's believed that the data Gino worked with in the afternoon of the 24th was data that she had copied and pasted from some other file — and specifically not, as the HC concluded, the Qualtrics data. Nonetheless, the HC ignores these findings by both experts, substituting its own "falsification scenario" that Gino relied on the raw Qualtrics output.
Yet if one instead accepts the testimony of both experts — that she copied the data she analyzed from another file and pasted it into the file that SPSS used — the question becomes, from where did she copy the data? It could have been the Rohe file. If the source was not Rohe, Gino testified that her ordinary practice was to transport datasets on USB thumb drives. But here, the incompetence of the HBS investigation destroyed the opportunity to learn from where she got the data.

As I've described, the RIO stated at the start of the investigation that "a 'forensic copy' was taken" of Gino's computer. That image would have revealed whether a thumb drive was inserted into her computer on January 24, the serial number of the drive, and at what time it was accessed. The computer's system logs could have revealed what files were opened and when. The RIO apparently didn't understand the difference between taking an image of the computer and simply copying some files from the computer to a different disk. Because in fact, no image of the computer was taken. By the time an image was taken — almost 2.5 years later — these logs had been overwritten. Incompetence thus destroyed the opportunity to discover from where Gino "copy and pasted" the data on January 24.

Thus, the HC had literally no evidence that Gino "cleaned" the Qualtrics data on January 24. Both experts reject that possibility, testifying instead that she copied the data from another file, such as the Rohe file. We don't have Rohe's file (HBS did not gather the files from his email as he requested) and we have no way to know for sure which file Gino copied the data from on January 24 (because the RIO failed to take a forensic image of Gino's computer at the start of the investigation). But in the face of this testimony, there is no foundation for the HC's conclusion that Gino had herself created the 1,066 anomalies, 415 of which do not strengthen her hypothesis.
Finally, the committee pointed to the fact that "Gino had already drafted a description of the study results" as "… indicating that she altered the data to conform to her pre-written draft." This wildly speculative leap is completely unsupported in the record, and goes beyond anything HBS had ever alleged. It betrays, moreover, ignorance of the practice within Gino's field. In crafting research projects, authors in Gino's field often draft summaries of potential findings to evaluate whether a proposed project could yield useful and publishable findings. This is especially valuable when a particular paper may include multiple studies. And frankly, it is prudent: There's no point conducting a study that, if successful, would yield a finding that gets a collective shrug. Yet the committee drew its conclusion from an undeveloped record — undeveloped because no one had ever suggested that this practice indicated research misconduct.

• How the investigative errors contributed to these substantive errors

In failing to capture a forensic image of Gino's computer when the investigation began, the RIO blocked an opportunity to demonstrate from where she got the data. Both experts said it was "copy-and-pasted" from some file. Had we had a forensic image, we could have said which. Likewise, HBS failed to investigate RA Alex Rohe's assertion that he, rather than Gino, had cleaned the Qualtrics data. This denied Gino the opportunity to use his evidence to demonstrate that she did not clean the Qualtrics data on January 24 and thus did not modify the results. As with every other allegation, HBS failed to investigate where the evidence could have been exculpatory, and failed to give Gino herself a meaningful opportunity to investigate, by blocking her until HBS had declared her guilty.

Conclusion

It is easy to believe Francesca Gino is guilty. The story almost writes itself — a prominent fraud researcher herself charged with fraud.
No one denies there were anomalies; how else could such anomalies be explained, except by pointing to the one person who would benefit from them? The public can therefore be forgiven for believing Gino guilty. But it was HBS's obligation to go beyond a presumption. It was its burden to investigate fairly the other plausible reasons for such anomalies, or to allow Gino from the start to do so herself. Instead, HBS conducted an investigation practically designed to confirm what anyone would presume: It failed systematically to investigate the other plausible reasons why anomalies could have been produced. It blocked Gino from any meaningful opportunity to investigate those reasons herself. And it failed to preserve the critical evidence that would have resolved with confidence who-did-what-when for the one allegation that is within the 6-year limitation.