Data Sharing Is Where Good Intentions Get Stress Tested
- Maira Elahi


When data sharing is described as ‘openness’ or ‘collaboration’, it sounds good, but the reality is messier. Once a dataset leaves the collection team, you have little control over how it is understood, combined, reused, or misinterpreted. This is where ethics comes in, not as a sentiment but as a practical examination of potential harm. If this information is misinterpreted, disclosed, or linked with something else, who could be affected? If something goes wrong, who bears the consequences, and who benefits from reuse? Commercial research data management platforms, especially those offered as Software as a Service (SaaS), sit at the intersection of these philosophical and operational questions. Either you tacitly rely on everyone behaving impeccably forever, or you build safeguards into the workflow.
What Regulations Actually Require, in Plain Terms
If you are new to privacy law, it can feel like a wall of formal language. The core requirements are simpler than they look. Regulations like GDPR, PIPEDA, and PHIPA/HIPAA are basically asking institutions to do three things consistently. First, be clear about why you have data and do not use it for unrelated purposes later. Second, collect and keep only what you truly need, and protect it like it matters. Third, be able to prove you did the first two, even years later, when the original team has moved on. That last part is what people miss. Compliance is not just behaving well. It is being able to demonstrate, with evidence, that your process was defensible at each decision point.
A commercial RDM SaaS platform becomes useful because it is built to turn those expectations into default behavior. Instead of a department hoping that everyone reads the policy PDF, the system can require that a dataset is tagged with an intended use, a sensitivity level, and a retention plan before it is shareable. It can prevent accidental oversharing by making certain actions impossible without approvals. It can make the compliance burden feel less like paperwork and more like a set of normal steps you cannot skip, the same way you cannot submit an expense report without a receipt.
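The gating described above can be sketched in a few lines. The field names here (`intended_use`, `sensitivity`, `retention_plan`) are illustrative assumptions, not any vendor's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative required fields; a real platform defines its own schema.
REQUIRED_FIELDS = ("intended_use", "sensitivity", "retention_plan")

@dataclass
class Dataset:
    name: str
    metadata: dict = field(default_factory=dict)

def can_share(ds: Dataset) -> tuple[bool, list[str]]:
    """Sharing is blocked until every required metadata field is filled in."""
    missing = [f for f in REQUIRED_FIELDS if not ds.metadata.get(f)]
    return (not missing, missing)
```

The point of the sketch is the return value: the system does not merely refuse, it tells the researcher exactly which fields stand between them and sharing, which is what makes the requirement feel like a normal step rather than a wall.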
Metadata Sounds Boring Until You Need It
The common definition of metadata is "data about data", which is accurate but not particularly helpful. Put another way, a dataset's metadata is its history: it is how another person learns what the data actually means, the assumptions that shaped it, the permissions granted, the boundaries set, and what must not be done with it. Without that history, data is easy to misrepresent. Stripped of its context or dropped into a different setting, it can be mistaken for generic, unconditional knowledge.
Commercial RDM SaaS platforms can force the story to travel with the files. When metadata fields are structured and mandatory, the platform can treat them as rules, not decorations. A dataset labeled as involving human participants is not just described differently. It is handled differently. It can trigger stronger authentication, stricter permissions, additional review steps, or limits on exporting. This is where ethics stops being an afterthought and turns into configuration. You are not asking users to “remember to be careful”. You are building carefulness into the rails of the system.
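One way to picture "ethics as configuration" is a policy table that maps a dataset label to concrete controls. The labels and control names below are hypothetical, not any real product's configuration:

```python
# Hypothetical policy table: a sensitivity label determines the controls.
POLICIES = {
    "human_participants": {"mfa": True,  "export": False, "extra_review": True},
    "internal":           {"mfa": True,  "export": True,  "extra_review": False},
    "public":             {"mfa": False, "export": True,  "extra_review": False},
}

def controls_for(label: str) -> dict:
    # Fail closed: an unknown label gets the strictest policy, not the loosest.
    return POLICIES.get(label, POLICIES["human_participants"])
```

The fail-closed default is the design choice that matters: a mislabeled or unlabeled dataset is treated as sensitive until someone says otherwise, which is carefulness built into the rails rather than into people's memories.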
Sharing Does Not Have to Mean Handing Over Copies
When many people think of sharing data, they picture emailing a spreadsheet or handing over a USB drive. The problem with that approach is that every duplicate of the data raises the chance of misuse or unintentional disclosure. Copies also make it hard to follow the rules, because you can no longer answer basic questions reliably: who holds the most recent version, where it is stored, what the loss risks are, or whether it has been shared onward with anyone else.
A more modern pattern is controlled access where the data stays put and the analysis comes to it. Commercial RDM SaaS platforms can support this by offering secure workspaces where approved researchers can run analyses without pulling the raw dataset onto personal devices. For a novice, the key shift is conceptual. Instead of thinking “Do we give them the data?”, you start thinking “What do we allow them to do with the data, under what conditions, and with what evidence trail?”. This is a calmer way to share sensitive information because it narrows the blast radius if something goes wrong. It also aligns more naturally with cross border and sector specific rules because you can avoid exporting data into environments you cannot control.
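That conceptual shift, from "give them the data" to "allow actions under conditions with an evidence trail", can be sketched as an approval set plus a log. The users and action names are hypothetical:

```python
# Sketch of controlled access: every request, allowed or denied, is recorded.
audit_trail: list[dict] = []

# Approvals name actions, not files. Note there is no ("alice", "export_raw"):
# the raw dataset never leaves the workspace.
APPROVED = {
    ("alice", "run_analysis"),
    ("alice", "view_summary"),
}

def request(user: str, action: str) -> bool:
    allowed = (user, action) in APPROVED
    audit_trail.append({"user": user, "action": action, "allowed": allowed})
    return allowed
```

Two details carry the idea: denials are logged as faithfully as grants, and the approval is scoped to an action rather than to a copy of the data.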
Audit Trails Are Not About Distrust

At first glance, people often react to audit logs as if they were surveillance, but in governance they are closer to lab notebooks. An audit trail is simply a reliable record of who accessed what, when they accessed it, and essentially what they did. That matters because institutions are accountable over long timelines. Students graduate, research assistants move on, principal investigators change, and yet the legal and ethical responsibility remains. When a regulator, a funder, a participant, or an ethics board asks what happened to a dataset, “we think it was handled properly” is not an acceptable answer. You need a record.
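What makes such a record "reliable" rather than just a text file is that it resists quiet editing. One common technique, sketched here under assumed entry fields, is a hash chain: each entry commits to the hash of the previous one, so rewriting history breaks every link after the change.

```python
import hashlib
import json

def append_event(log: list, actor: str, action: str, resource: str) -> dict:
    """Append a tamper-evident entry that commits to the previous entry's hash."""
    entry = {
        "actor": actor,
        "action": action,
        "resource": resource,
        "prev": log[-1]["hash"] if log else "0" * 64,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry fails the check."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "resource", "prev")}
        if e["prev"] != prev:
            return False
        if e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

This is only a sketch (a production system would also timestamp entries and write them to append-only storage), but it shows why a log can serve as evidence rather than as a claim.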
Commercial RDM SaaS platforms make that record automatic. Access events, file changes, downloads, and permission updates are logged as a normal part of the system. If a breach occurs, those logs support investigation and reporting. If nothing bad happens, they still serve a quieter purpose: they create a credible environment where ethical commitments are verifiable. That credibility matters in research partnerships because trust is easier to maintain when governance is legible.

Once you use a commercial SaaS platform, a third party is involved in your data lifecycle. That does not automatically make things unsafe, but it does complicate the governance picture. Institutions sometimes talk as if buying a platform were outsourcing responsibility. It is not. The institution still owns the ethical and legal risk. The vendor supplies the infrastructure, but the institution remains accountable for how data is used, who can access it, how that access is conducted, and whether obligations were met.
This is why vendor due diligence is not bureaucratic theater. It is where you answer basic but serious questions. Where is the data stored, and does that create cross border issues? Who are the subcontractors that might touch the infrastructure? What encryption is used, and how are keys managed? What happens if you terminate the contract, and can you actually export your data cleanly and delete it fully? A good RDM SaaS product helps you comply, but a bad contract can quietly undermine that.
Interoperability Is Quietly Ethical
Interoperability sounds like an engineering preference until you see how lock in works. If a platform makes it painful to export data or forces you into proprietary formats, you create a structural dependency. That can become an equity problem because well funded institutions can pay to integrate systems while smaller partners cannot. It also becomes a governance problem because portability is part of control. If you cannot move your data easily, your negotiating position weakens, and long term stewardship gets harder.
Commercial RDM SaaS platforms that support standard identifiers, common metadata schemas, and clean export paths are doing something ethically important even if they market it as a feature. They reduce concentration of control and make collaboration more realistic across institutions with different resources. In practice, interoperability is one of the simplest signals that a platform is designed for the research ecosystem rather than only for customer retention.
Compliance Is Not Static, So the System Cannot Be Static
Even if your internal policies never change, the external environment will. New guidance emerges, laws evolve, expectations around AI and automated decision making shift, and cross-border data access keeps tightening. Static governance setups fail slowly, then suddenly: you discover you have been noncompliant for months because your controls did not adapt to a new rule or a revised interpretation. This is where commercial SaaS can be an advantage if it is designed well. Policy engines that can be updated, classification rules that can be re-applied, and monitoring that flags risky patterns help institutions stay aligned over time. The goal is not perfect prediction. It is reducing the chance that governance drifts away from reality while everyone assumes things are fine.
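"Classification rules that can be re-applied" is the key mechanism, and it can be sketched simply. The rule shape and dataset fields below are illustrative assumptions; the point is that a policy update sweeps over existing data, not only new data:

```python
# Sketch: re-run the current rule set against every existing dataset so a
# regulatory change takes effect retroactively. Field names are hypothetical.
def reclassify(datasets: list[dict], rules: list[dict]) -> list[str]:
    changed = []
    for ds in datasets:
        for rule in rules:
            if rule["applies"](ds) and ds["class"] != rule["class"]:
                ds["class"] = rule["class"]
                changed.append(ds["name"])
    return changed

datasets = [
    {"name": "eu_cohort", "class": "standard", "region": "EU"},
    {"name": "local_survey", "class": "standard", "region": "CA"},
]
# A new rule after a (hypothetical) guidance change: EU-hosted data now
# requires cross-border review before sharing.
rules = [
    {"applies": lambda ds: ds["region"] == "EU", "class": "cross_border_review"},
]
```

Running `reclassify(datasets, rules)` would flag `eu_cohort` and leave `local_survey` untouched, which is exactly the behavior a static setup cannot deliver: the months-old dataset gets caught by the new rule, not just tomorrow's upload.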
What Commercial RDM SaaS Really Contributes
Strip away the branding and the true value proposition of commercial RDM SaaS is this: it turns moral commitments and legal obligations into repeatable processes. It provides a space where permission requirements can be recorded as binding restrictions, where access can be graduated rather than all or nothing, where accountability is recorded automatically, and where collaboration can happen without spawning multiple copies of private information. It does not remove the need for institutional ownership, ethics review, or judgment. Implemented carefully, it can reduce the number of moments in which governance depends on someone remembering the right thing under time pressure.
Research today is dispersed across organizations, legal systems, cloud environments, and regulatory frameworks. A project starts with a grant application, then moves through ethics review, collects sensitive PHI and PII, generates datasets, circulates drafts under NDA, and finally publishes or archives its results. The risk profile changes at each step, and consistency is often lacking. Data governance fails when the system that controls team access is isolated from the systems that hold ethics approvals, audit activity, and the data itself. Ethical lapses happen in those gaps. A commercial RDM SaaS platform that supports the whole lifecycle reduces those seams. myLaminin fits into that category as connecting infrastructure for research administration, data management, and compliance rather than as a storage tool.
__________________________________

Maira Elahi (article author) is a myLaminin intern and an Accounting student in the Ivey AEO program at Western University.




