Does Leantime use an LLM and how does Leantime manage the impacts of AI?

Using AI in Leantime: LLMs and Ethical Considerations



We currently use a combination of fine-tuning, embeddings, and Anthropic's Claude 3.5 Sonnet as a foundational LLM. We specifically use Claude because, both in our own testing and in Anthropic's conduct as a business, we have seen them adhere and commit to higher standards of ethical AI usage. Anthropic has been notably intentional in its modeling and has, in many cases, called for regulation rather than deferring it.

Anthropic's Responsible Scaling Policy
Anthropic's Position on Third Party Testing & Self Governance
Anthropic's Policy Recommendations to the Government
Claude's Constitutional AI
Anthropic's Economic Index

Addressing Bias in AI



Even with the best intentions, and given the nature of bias, we believe AI tends toward biased responses. This is a functional flaw of the material from which AI learns: us. For this reason, when we develop on top of these models, we take extra care to assess for additional points of bias.

There is a significant amount of research on bias around names (whether female or cultural in origin), where humans unintentionally exhibit bias toward a name, for example on a resume or in a job opportunity. Several Stanford and other studies have found the same biases occurring in ChatGPT and other models. This article highlights results from a more recent (2025) Stanford Law School paper on these biases.

While we do not use the LLMs mentioned in those studies, we recognize that the risk remains, and we see it as a risk to our own features. One of our AI-based features is AI status reports: users update their tasks from their work dashboards, and AI formats a streamlined update and emails it out to the project stakeholders.

If, as developers, we include users' names in the prompts we send to the AI, what is the impact if the AI overemphasizes the results attached to some names while underemphasizing the results of others, or spins one person's update negatively compared to another's?

We had a similar concern regarding our Story Time feature (which uses AI to create personalized impact stories about the work you're assigned on a project). To limit this risk, we are intentional in assessing our prompts and the data points we send over to the AI, as sketched below.
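
To make that concrete, here is a minimal sketch of one way names can be pseudonymized before a prompt goes to a model and restored afterwards. This is an illustration only, written in Python rather than our actual codebase; the function names and sample data are hypothetical, not Leantime's implementation.

```python
# Hypothetical sketch: replace user names with neutral placeholders before
# status updates are sent to the LLM, then restore them afterwards.
# Illustrative only -- not Leantime's actual implementation.

def anonymize_updates(updates: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Build the prompt body with placeholders instead of real names."""
    mapping = {}
    lines = []
    for i, (name, update) in enumerate(updates.items(), start=1):
        placeholder = f"Contributor {i}"
        mapping[placeholder] = name
        lines.append(f"{placeholder}: {update}")
    return "\n".join(lines), mapping

def restore_names(report: str, mapping: dict[str, str]) -> str:
    """Swap the placeholders back for real names in the generated report."""
    for placeholder, name in mapping.items():
        report = report.replace(placeholder, name)
    return report

updates = {
    "Dana": "Closed three tickets; sprint goal on track.",
    "Amir": "Blocked on API review; two tasks carried over.",
}
prompt_body, mapping = anonymize_updates(updates)
# prompt_body is all the model sees -- no names, so name-based bias
# has nothing to attach to.
# report = call_llm(prompt_body)  # hypothetical LLM call
report = "Contributor 1 is on track. Contributor 2 has reported a blocker."
print(restore_names(report, mapping))
```

The same idea extends to any personally identifying data point: send the model only what it needs to format the report, and merge identity back in after generation.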

How we use AI



For our embeddings and fine-tuning, we focus specifically on the human side of motivation, including our internal, proprietary approaches (particularly around neurodivergent motivation).

We are also focused on using AI smartly: where it adds real benefit, and in ways that are not traditionally covered by other tools. We view AI as a partner to humans rather than an override of them. It should enable us to do more and do better, not replace the things that we, as people, are meant to bring to the table.

Environmental Impacts of AI



Lastly, we recognize that the environmental impact is harder to reconcile. We do what we can by using hosting providers that have sustainability measures and initiatives in place. Some emerging research argues that LLMs cost "less" environmentally than light human labor, but that argument circles back to the overall economic impact, which is not small either. At this time, we are making the most balanced choices we can when we look at the whole picture: bias, integrity, economics, ethics, and environment.


Share your thoughts



We would love to hear your thoughts on this. We are always learning, but we recognize that we are human, that we may not be aware of certain elements, and that things change quickly in AI. Please don't hesitate to reach out and let us know your perspective.

Updated on: 25/03/2025
