WEEKLY HIGHLIGHTS 2025 SECOND HALF

 
HIGHLIGHTS FOR WEEK OF 29 DECEMBER – 4 JANUARY 2026
 
 
Radio Atlantic podcast “He’s Undocumented. She’s Not” and Turning 60
 
I recently listened to a podcast by The Atlantic that brought me close to tears. The storyline sounded like a movie script, yet it was real.
 
It was the story of a couple living in Chicago. The woman is a US citizen, while the husband is undocumented.
The husband, together with his mom, entered the US illegally from Poland when he was about 6 years old. He grew up like a normal kid in the US, but with a Polish passport and an illegal residency status, through no fault of his own. Because he entered the US illegally as a child, he never had a chance to obtain US residency or citizenship.
 
This naturally had major consequences for his life. Not only could he not travel out of and back into the country, he also constantly had to worry about being found out whenever he was stopped by the police for something as simple as speeding.
 
What then followed was the moving story of how the couple gradually recognised that their only possible path to a normal life was emigrating to Poland.
 
What made deciding to move to Poland so difficult were the consequences of this decision. The husband knew that by taking this step, he would burn all bridges and could never come back to live in what had become his home country. The wife would be starting a new life in a country about which she knew nothing, in which she had no friends, and whose language she did not speak.
 
Starting anew in life is scary. But at the same time, building a new life seems exciting.
 
Overcoming difficulties often requires boldness: leaving what we have behind in exchange for the prospect of a better life. This is usually hard because we know what we will lose, but cannot imagine what we might gain or what our life under the new circumstances will be like. As a result, we tend to disproportionately overestimate the value of what we have.
 
Naturally, there are no guarantees that changing a specific aspect in our life will help us overcome our difficulties. Yet, in most cases, we end up much better off, at least subjectively, because our brain is able to adapt to new circumstances much better than we think. We can often tolerate things that we imagine as intolerable. Moreover, what I have discovered time and time again is that after abandoning something that seemed important, it becomes much less important.
 
This brings me to a major turning point that I was reminded of this week while turning 60: in a few years, I will reach my official retirement age.
 
The first thing that comes to mind is that the thought of reaching official retirement soon seems surreal, because I do not feel significantly different from how I felt twenty or thirty years ago. I now need glasses to read, and I run a bit slower than when I was young. But I can still do experiments (and often better than my students), and I feel that I am better at most other things than I used to be.
 
At the same time, I strongly agree that from a societal and university point of view, retirement is a good thing. There are so many talented, and often more capable, young scientists who want to take on roles in research and make a difference to our society. Senior professors like me should not stand in their way.
 
What is more, I believe that retirement is also a good thing from a personal point of view.
 
I used to dread this time in my life because I was afraid of no longer feeling needed. But now I look forward to it, not despite, but because of the uncertainty of what it will bring. I look forward to changing my focus in life and spending my days entirely differently. It feels like a new start with many opportunities.
 
Just like for the American-Polish couple, being forced to make a major change in life can be a gift.
 
 
HIGHLIGHTS FOR WEEK OF 22 – 28 DECEMBER
 
 
Christmas in Germany
 
I spent a week with my parents in Germany, which was wonderful, except for the outdoor temperatures …
 
 
HIGHLIGHTS FOR WEEK OF 15 – 21 DECEMBER
 
 
On personal improvement
 
This month, I have spent much time on personal improvement. After radically minimising my wardrobe and belongings almost two years ago, I realised that there were still many things I had not worn or used since then, prompting me to conduct another major clean-out.
 
One thing that this clean-out made me think about is how much waste many people (including myself!) generate by buying unnecessary stuff. The tendency of people to keep buying stuff explains why the huge rubbish containers outside my HDB block seem to be constantly overflowing.
 
Considering that all this waste needs to be disposed of is horrifying and makes me feel guilty. I hope that these feelings of guilt will make me more conscious before buying new stuff in the future!
 
On a personal level, though, having gotten rid of lots of stuff always feels great. Paradoxically, I usually do not anticipate this. On the contrary, I expect that by getting rid of things, I will experience a great loss. This is of course precisely what makes it so difficult to dispose of our belongings in the first place.
 
There are likely a number of reasons for the discrepancy between our anticipated and actual feelings after parting with things we own, including that our possessions tend to be less important to us than we think, and that it is hard to imagine what life might be like without them. In addition, I believe that our brain quickly adapts to the new reality of no longer having certain things.
 
Minimising possessions feels good because I find it wonderful to look at non-cluttered spaces. Moreover, it makes me waste less time thinking or worrying about my possessions and maintaining them, which in turn keeps me focussed on the things that are important to me.
 
Just as important as minimising the things we own (and trying to keep them at a minimum) is eliminating mental distractions related to the things we keep and avoiding preoccupation with accumulating new possessions. In fact, I feel that it is less about the number of things we have and more about how much time we spend thinking about them and about acquiring more. As such, for me minimalism is less about having a minimal amount of things and more about thinking less about material things.
 
Not thinking about material possessions tends to be more difficult than getting rid of things, at least for me. One reason is that I seem to be a rather obsessive person who, once obsessed by a thought, finds it hard to let go.
 
As I have discussed before, one major obsession throughout my life has been collecting records. Notably, at present, shortly before the holidays, I have no records lined up in my browser tabs that I plan to buy. This may present a unique opportunity to make a change by trying not to add new records to my collection.
Based on my experience, actually accomplishing this would involve a number of changes in my mindset and behaviour, all of which seem rather difficult!
 
One approach would be to convince myself that it is not worthwhile to keep searching for new music and buying more records.
 
A second approach is to minimise exposing myself to triggers that prompt the search for new records.
 
Finally, it would be helpful if I could overcome the natural human urge to constantly seek novelty.
 
Regarding the first point, I previously discussed the approach of telling myself that it is unlikely that I will discover records that give me more satisfaction than those I already own. The good part is that this is actually true. Nonetheless, the excitement of potentially discovering something new, and owning it, is difficult to extinguish. I will come back to this below when discussing the third point.
 
For everything we do but did not plan to do, there tends to be a trigger. Trying to minimise my exposure to potential triggers of unwanted behaviour can be quite effective. In the past, I have changed certain routines, for instance by committing to no longer visit certain websites or completely forgoing a habit, often with remarkable results.
 
Avoiding certain websites eventually leads to our brain forgetting the habit. However, what I have noticed is that by killing one habit, another one tends to crop up. The reason for this is that our (or at least my) brain, at times of idleness, seems to constantly search for novelty, which brings me to the last point.
 
In the end, the ultimate goal is to kill the desire for seeking novelty. The reason why we seek novelty likely lies in the possibility of discovering something exciting, despite knowing that this actually won’t make us any happier. In fact, it may do the opposite.
 
On the other hand, not exposing myself to novelty tends to make me feel good. For instance, I have recently rediscovered that it feels great to sometimes go to work without listening to music or podcasts, because it allows me to notice other things and to be more mindful and conscious about the things I plan to do and enjoy during the day. After all, being alive is about being present in the moment and not being distracted.
 
It seems that we have lost the ability to do nothing, which is the subject of a book by Jenny Odell entitled “How to Do Nothing”, which I read this year and which made me want to become better at doing nothing. I have great hopes for my one-month holiday after the end of the next semester (in a place that I have not yet decided on), which I plan to spend doing nothing (or at least very little).

Ultimately, I would like to reach a different mental state where the conditions to feel satisfied change, where just being and doing gives me satisfaction. This is indeed one of my goals for the coming year.
 
 
HIGHLIGHTS FOR WEEK OF 8 – 14 DECEMBER
 
 
Finalising plans for LSM2233 Cell Biology
 
Last week, I talked about my new plans for my teaching in the coming semester.
 
While the things I have thought of seem exciting, my main worry is whether I will be able to actually design and carry out these activities in a well-coordinated and effective manner. It can appear a bit overwhelming, even though I have felt similarly before other semesters and in the end managed to put my plans into practice.
 
This week, I took a big step towards feeling a bit more at ease by planning the semester in detail, including the various activities I would like to conduct.
 
I also took a step towards overcoming last year’s problem of disappearing images in the Learning Catalytics team-based learning platform by identifying a (hopefully!) reliable image hosting website.
 
This means that I can go into my Christmas holiday without worrying too much.
 
 
HIGHLIGHTS FOR WEEK OF 1 – 7 DECEMBER
 
 
LSM2233 Cell Biology
 
Two weeks ago, I discussed my disappointment about my performance in our School’s postgraduate student workshop. What made things worse is that one student from the organising committee told me that she took my Cell Biology course during the Covid pandemic, but seemed not to remember any details.
 
The reason why this shocked me was that I personally felt the course was amazing and must have stood out to students, too, especially during the pandemic. What I thought differentiated my course from most others during the COVID-19 period was, firstly, how truly interactive my lectures were. Secondly, there were the team-based learning quizzes, delivered via the Learning Catalytics platform, which I perceived as extremely engaging and fun for students.
 
Indeed, while writing a recommendation letter for my former student Diya, who took the course during the COVID-19 period, I discovered an old email that she sent me during the course, in which she wrote:
 
I would like you to know that it is honestly such a pleasure being part of your class for this module. Even though sometimes the content can be quite challenging, I really, really appreciate the way you deliver it, and how you place so much more emphasis on learning rather than scoring. I find your lectures very exciting, and I want you to know that you have helped me rediscover my joy for learning!
 
Reading Diya’s email reminded me of how much joy and satisfaction teaching can bring and made me feel excited about the coming semester.
 
Moreover, when I tried looking up the student who could not remember my course during the Covid period, I could not actually find her name in the student list. Thus, perhaps I should not be too worried about this.
 
The thing that makes teaching most exciting, at least for me, is trying out new learning activities. In fact, I have spent the past few months thinking hard about what new activities I could introduce in the coming semester. However, it was only during the last few weeks that some things finally materialised.
 
One goal I had was to introduce activities using AI tools other than chatbot-based exercises. Introducing such activities requires that I first learn how to use the specific tools myself. Thus, I have invested time in learning how to visualise protein structures predicted by AlphaFold using ChimeraX, an amazing and freely accessible programme.
 
The next challenge was then to come up with interesting and meaningful exercises. My original plan was to let students pursue protein structure-based research questions on the effect of protein ubiquitination on the binding of substrate proteins to E3 ubiquitin ligases.
 
One question that I have been interested in for a long time is the binding between E3 ubiquitin ligases and their substrates, specifically whether E3 ligases can distinguish between protein substrates that are non-ubiquitinated and those that are already conjugated with ubiquitin. The reason is that the site within a substrate protein to which the E3 ligase binds is different from the site to which the ubiquitin polypeptides are attached. Therefore, it should in theory not matter whether a substrate is ubiquitinated or not. However, there may be factors other than the E3 ligase binding site that determine the interaction, such as steric hindrance effects.
 
 
However, this idea turned out to be infeasible because AlphaFold cannot model ubiquitinated proteins.
Nonetheless, I have managed (and am still in the process of trying) to come up with other exercises in which students can use AlphaFold and ChimeraX to predict or explain the effect of mutations, protein modifications, or interactions with binding partners on protein structure and function.
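To give a flavour of what such an exercise might look like in practice, here is a minimal sketch of a ChimeraX Python script. It is only an illustration: the UniProt accession is an arbitrary example, and the commands follow ChimeraX’s documented command interface rather than any actual course exercise.

```python
# Minimal ChimeraX Python script (open it from within ChimeraX,
# which supplies the global 'session' object to scripts it runs).
from chimerax.core.commands import run

# Fetch the AlphaFold-predicted structure for a UniProt accession.
# P69905 (human haemoglobin alpha) is used purely as an example.
run(session, "alphafold fetch P69905")

# Colour the model by per-residue confidence (AlphaFold stores its
# pLDDT confidence scores in the B-factor field of the model).
run(session, "color bfactor palette alphafold")
```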
 
I have also planned an exercise that involves identifying and validating interacting proteins using online search tools and databases and characterising structural aspects of the interaction using AlphaFold.
 
The reason why I feel excited about these exercises is that they resemble real research, where students test hypotheses and make their own decisions, and where in some instances there is no predictable outcome and students may even find something new.
 
Another of my plans after the last semester was to introduce more activities related to creative thinking.
Firstly, I am planning to conduct activities in which students need to identify what is unexpected in given datasets. This is something I have done in the past and want to continue and expand in the coming semester. Over the past months, I have read many relevant papers to find good examples.
 
Another idea is to let students come up with a research question. I have tried similar activities in previous semesters. For instance, in one semester I discussed an example of a good research question and then asked student groups to come up with their own research question about any phenomenon they had encountered. Subsequently, the groups had to try to answer their research question through literature research and produce an engaging video discussing it. Although this was an interesting assignment, it had the drawback that it did not blend in well with the remaining course contents.
 
In another exercise that I conducted last year, I asked students to propose a hypothesis for a research question based on a scientific talk we watched during class. This activity was less successful. Coming up with good scientific hypotheses proved difficult for students, due to a lack of scientific knowledge and of awareness of what a good hypothesis looks like.
 
Generating a good research question requires less scientific knowledge than proposing a specific hypothesis. Nonetheless, some theoretical foundations are helpful.
 
In particular, one of my goals for the coming semester is for students to appreciate different types of research questions and recognise what makes a good research question. As such, it will be useful to discuss examples of good research questions, as well as research questions that are less good.
 
According to my own categorisation, research questions (or research objectives) can fall into different groups.
 
At the most basic level, they address a gap in our current knowledge, for instance by trying to identify the mechanism underlying a phenomenon, characterise a missing link in a given pathway, address the physiological significance of a novel mechanism, or develop drugs against a newly identified drug target.
 
At a more complex level, a research question draws connections to other areas or considers the bigger picture.
 
At the most advanced level, a research question is based on uncovering hidden assumptions or recognising previously unnoticed patterns, often by removing commonly assumed constraints. These advanced research questions can take the form of questions aimed at identifying general principles, or of “what-if” questions of potentially broad significance that can lead to scientific breakthroughs.
 
A leading proponent of “What-if” questions is Noubar Afeyan, co-founder and chairman of Moderna and founder and CEO of Flagship Pioneering. In 2010, he and his colleagues at Flagship Pioneering asked the question “Could mRNA be a drug?”. To pursue this question, Moderna was founded, ultimately leading to the development of game-changing mRNA vaccines during the COVID-19 pandemic.
 
Amazing scientist, inventor, CEO, and entrepreneur Noubar Afeyan
 
The research towards such goals often involves the development of new technologies, which enable researchers to ask new questions that previously could not be answered.
 
As highlighted above, it will be crucial to explain to students the different types of research questions and illustrate them with examples. This is probably best achieved in the form of a pre-lecture video, which could then be followed by an in-class activity in which students generate and refine their own research questions.
 
To emphasise essential elements of brainstorming, I plan to initially let the student groups come up with and submit a minimum number of research question ideas. Subsequently, the groups could be tasked with ranking the questions according to how interesting they are, followed by asking ChatGPT to also rank them and provide feedback on all questions. Finally, students choose and refine the question that their group feels most excited about and explain their final choice.
 
Finally, a reflection from one of my students during the last semester made me think. The student shared with me his thoughts about becoming an interesting and thoughtful person who is able to articulate his ideas and opinions well.
 
Like the student, I am often impressed by how well others are able to express themselves. I suspect that most of the amazing presenters I encounter have actively worked on their communication skills through practice and learning from failure.
 
But as my student also pointed out, in order to be a good communicator, a crucial first step is having one’s own opinions and insights.
 
Indeed, when I read or listen to content created by others, what impresses me most are interesting and unique ideas, opinions and insights. The only way to arrive at individual ideas, opinions and insights is by seeking new experiences and reflecting on them.
 
In the age of AI, standing out as an individual, by having one’s own opinions and original ideas and by being able to communicate these effectively, is more important than ever.
 
As such, I want to continue the practice of reflection assignments, for instance by focussing on the importance of not only being a consumer, but also a creator of content, in the broadest sense of the word. Creating content, i.e. actions, products or ideas that are of value to others, be it our family and friends or society at large, can be considered one important meaning of life. The earlier we start doing so, the better we will become at it.
 
However, having ideas and being able to communicate them is likely not enough. Success will ultimately depend on another factor, our personal attributes, e.g. to what extent we are able to put our ideas into practice and overcome the difficulties we struggle with.
 
As such, topics that I feel are important for students to reflect on are
 
  • to what extent they are not only learning and consuming, but also creating content, and
  • how they have managed, or plan, to overcome the difficulties they are facing.
 
HIGHLIGHTS FOR WEEK OF 24 – 30 NOVEMBER
 
 
Podcasts
 
Recently, I have listened to a number of amazing podcasts that share a common theme. First, there was The Rest Is Politics Leading interview with playwright James Graham and Jacob Dunne, whose story is told in Graham’s play “Punch”. As a young man, Jacob punched someone, and the victim fell and died as a result.
 
The play tracks how Jacob took responsibility for his actions by seeking forgiveness from the victim’s parents. This process of restorative justice, in which the offender tries to repair the harm caused by his or her criminal behaviour, is known to help both offenders and victims, or their relatives, move on and progress in their lives. Yet, in most current prison systems, options for offenders and victims to engage in restorative justice are severely limited.
 
Then there was the BBC documentary “Raising children on a warming planet”, in which I learned how an airline pilot and his climate-activist son, despite seemingly being on opposite ends of the climate change debate, managed to find common ground and develop empathy for each other.
 
And finally, there was Natalie Kitroeff’s emotional New York Times “The Daily” podcast, entitled “Parenting a trans kid in Trump’s America”.
 
This podcast told the moving story of two parents living in the southern US state of Tennessee and one of their children, who seemed different from their other three kids. From early childhood, the child appeared sad and withdrawn and exhibited many unusual behaviours, such as not acting in the way a boy was expected to and instead being excited about things that girls typically like.
 
It was not until their child turned 14 that the mom finally started reading on the internet about gender dysphoria, a condition in which children display unusual behaviours and experience psychological distress because of a mismatch between their biological sex and their gender identity. She decided to talk to her child about it. Hearing about this conversation, about the mom’s and her husband’s determination to support the child, and about the child suddenly becoming a happy girl was very moving.
 
However, all this happened against the background of US President Donald Trump’s second term in office and his administration’s denial of the existence of transgender identity and, worse, its efforts to prevent minors from undergoing hormonal gender-affirming medical care. As a consequence, all doctors in the state of Tennessee who previously carried out this type of treatment were no longer able to do so. Beyond making it impossible to find a doctor who could give their child the therapy the parents felt it needed, the radical new policies changed the attitudes of the public and their friends, and raised worries that schools would not protect transgender children. All this was heartbreaking.
 
Eventually, the whole family moved north to a blue, Democratic state and a school that could guarantee their child’s safety. But even there, the consequences of the Trump regime’s policies caught up with them. At one point, the father even contemplated moving to a different country.
 
One thing that these stories made me realise is how quickly I myself tend to judge other people’s behaviour. Rory Stewart, co-host of The Rest Is Politics, pointed out in one of his podcasts how easy and common it is to take the moral high ground and condemn people for their behaviour, when most of us probably also sometimes do things that we do not want other people to know about. I certainly do.
 
Most people also rarely ask themselves how they would have reacted in the situation others find themselves in, or even what it may be like to be experiencing what others do. If we were in the shoes of other people, would we have the courage to take action or act differently? How often do we ourselves speak up if we encounter things that are not right? How often do we express empathy and offer help when other people need it?
 
There is another aspect of empathy and helping others that I have recently been thinking about: recognition. I sometimes notice a tendency in myself to expect recognition from others, especially when I invest a lot in trying to help them.
 
Where does this come from? It may be different for everyone. For me, it is certainly not a need to be admired. Instead, I feel a sense of unfairness when I am not recognised despite doing things better than others. Because I have a strong sense of fairness, I keenly acknowledge other people’s successes, and feel uncomfortable about praise I receive that I feel is not justified.
 
I have always been most inspired by people who do not care about credit for what they do. Since this post started with podcasts: I have gotten into the habit of listening to the great Sherlock Holmes podcasts from Noiser. What has impressed me is that the great detective usually wants to remain unknown as the person who solved the crime; solving a case is all the satisfaction he needs.
 
Of course, there are many inspiring real people who, like Sherlock Holmes, do not seek credit for what they do. Yet, despite not seeking recognition, many of them are well recognised. They achieved their reputation precisely because they started out undeterred by a lack of recognition; recognition was never something they cared about.
 
 
HIGHLIGHTS FOR WEEK OF 17 – 23 NOVEMBER
 
 
Learning from failure
 
This week, I gave an invited talk in our school for graduate students on how to prepare the research proposal and oral presentation for the PhD Qualifying Exam. I felt genuinely honoured to be invited once again to address this topic. I was also excited about the opportunity, especially because I had a lot of tips and advice I wanted to share.
 
However, as the day approached, I realised that I probably had too many things to share within the time allocated for my talk. So I did what I always advise my students not to do: cram in everything and present as fast as possible. It is indeed interesting how often I do not follow the advice I give to others.
 
The reason why I tried to cover everything was the same reason why many students include too much content in their presentations: I did not know what to omit because I felt that everything was important.
 
The main reason why I am able to help students with their presentations is that I have an informed outsider’s perspective, which allows me to judge which points are important for the audience and which are not. The answer to my dilemma would therefore have been simple: I should have asked someone what I could omit.
 
Instead, I ended up racing through my points, which likely made it difficult for students to follow, especially foreign students.
 
Being under time pressure also has negative effects on me as a presenter. It makes me stressed and focused on finishing my presentation on time, instead of concentrating on the audience and ensuring that the students follow my train of thought and enjoy the presentation. Indeed, my best lectures are those where I do have time to cover my points and can afford to interact with students extensively.
 
So at least the experience helped me (re)learn an important lesson. After all, success does not come from avoiding failure, but from overcoming it.
 
 
HIGHLIGHTS FOR WEEK OF 10 – 16 NOVEMBER

 
The Key Podcast: An English Professor Embracing AI
 
One of the areas that has been most affected by the emergence of generative AI is non-fiction writing. People are using chatbots to come up with ideas for articles, to improve their writing, or to do all the writing for them. Naturally, this is especially true in universities. It is hard to imagine that there are many students who do not use chatbots to help them with their writing assignments.
 
As such, I listened with great interest to a podcast discussing the new Authorship AI tool from Grammarly. The podcast featured Jenny Maxwell, representing the company behind Grammarly Authorship, as well as a university lecturer, Jenny Billings, who has been using the application in her teaching.
 
Two of the lecturer’s opening remarks resonated with me greatly. Firstly, she expressed that she feels a moral responsibility towards students to incorporate AI into her teaching and help students prepare for a future in which AI is abundant. Secondly, she pointed out that students expect to be able to use AI tools in their high-stakes experiences, such as exams.
 
With regard to the second comment, I agree that students’ expectation to use AI tools in their assignments and assessments is perfectly reasonable, given that in the real world they are unlikely to carry out similar tasks without using AI. This is indeed why Grammarly developed its Authorship tool.
 
What is Grammarly Authorship?
 
Grammarly Authorship can be used in Microsoft Word or Google Docs by installing the Grammarly application. When students write with Grammarly Authorship, text that is written by the students themselves, created with AI (i.e. copied and pasted into the document from a chatbot), copied from another website, or edited with Grammarly is automatically labelled. When students submit their essay at the end of their writing process, they can share their report with the lecturer. The lecturer is thus able to see what students wrote by themselves, as well as the sources from which they copied content.
 
Grammarly Authorship also keeps track of the time students take to write an essay. Finally, it records the writing process and the evolution of the essay. If the students share the writing replay with their lecturer, the instructor can literally watch how the students composed their essay.
 
The two podcast guests highlighted various positive outcomes of using Grammarly Authorship:

  • The tool can help students become aware of how they engage with AI tools.
  • Students may feel more assured of being credited for their own contributions.
  • Because students get to see their report before submitting it, they can decide to increase their own input if they feel they have copied too much from other sources, before handing in their assignment.
 
Grammarly is also planning to incorporate new elements in the near future, such as a reader reaction option, which gives authors an idea of what potential readers think of their essay. Another planned feature is a virtual expert panel providing feedback on written pieces.
 
While all this sounds exciting, there are also critical voices among many instructors.
 
The first concern is a very practical one. Students could simply use another device to copy AI content, by reading off the other device and typing the content manually. This may lead to anomalies in their “writing report” (faster-than-expected text generation, lack of editing), but this type of cheating is nonetheless difficult to prove.
 
What this shows once again is that whatever measures instructors take to prevent cheating, students usually find ways around them.
 
The approach used by Grammarly Authorship is called process tracking. An insightful analysis by the Modern Language Association Task Force on AI in Research and Teaching raised a number of additional concerns about the process-tracking approach to dealing with AI.
 
Howard Community College Professor Kofi Adisa highlights that by tracking the process, we disrupt students’ creative writing process. He points out that for him, writing is an individual process: draft something, correct it, or sometimes delete it and start all over. He writes that “if I knew as a student, I had to share my process or worse to see that it was being tracked and that information was somehow in the purview of my professor, I probably would be too self-conscious and worried that my process was judging my writing.”
 
I especially like the criterion he uses for choosing his assessment conditions.
 
“As a professor, I never ask students to do something for the class that I couldn’t do if I were a student.”
 
I personally would certainly not want to share the process of drafting my posts. Not only do my initial drafts look very immature, I often draft ideas that I do not want others to see, and I often start out with much more extreme or entirely different opinions compared to what I share in the final product. After all, the writing itself is an idea generation process, which is often quite personal.
 
Another author, Prof. Leonardo Flores from Appalachian State University, makes some additional excellent points. He points out that tools like Grammarly Authorship are essentially built with mistrust as a baseline. As such, they are likely to be viewed by students as surveillance tools. This highlights another motivation, beyond testing students under real-world conditions, for letting them use AI tools freely: to build trust.
 
Leonardo Flores has a second concern. He worries that process tracking “can and will be used by educators to avoid updating their pedagogical practices to account for the impact of generative AI. Like AI detection software, it is simply another weapon in an AI arms race. … Both AI detection and process tracking can be easily circumvented, but the harm they do to writing practices and pedagogical situations are lasting.”
 
As such, my personal answer is not to track whether and how students have used AI, but instead to require that they use generative AI tools to solve tasks, or to have them reflect on how these tools have helped them.
 
 
HIGHLIGHTS FOR WEEK OF 3 – 9 NOVEMBER
 
 
Mitochondrial uncouplers
 
One topic we have researched in recent years, and recently published two papers on, is mitochondrial uncouplers. These compounds increase energy expenditure by causing our mitochondria to continuously burn fuels derived from sugar and fat. As a result, uncouplers are able to induce substantial weight loss.
 
On a biochemical level, uncouplers dissipate the proton gradient across the inner mitochondrial membrane. When our mitochondria, the cellular organelles often referred to as the powerhouses of the cell, burn fuel derived from carbohydrates and fats, they convert the released energy into a proton gradient across their inner membrane. Mitochondria then utilise this proton gradient to synthesise ATP, the universal energy currency of our body. Energy-rich ATP is synthesised from its energy-poor precursor ADP via the addition of a phosphate group.
 
The generated ATP can then be used to drive all energy-dependent cellular functions, such as muscle contractions, protein synthesis and brain activity. In this process, ATP is converted back to ADP, and the cycle of ATP synthesis in the mitochondria and ATP utilisation in the rest of the cell continues.
 
One important concept required to understand the mechanism of uncoupler compounds is the coupling of fuel oxidation to the availability of ADP. When cells do little work, little ADP is produced through energy-requiring processes. Under this condition of low ADP availability, the proton gradient across the inner mitochondrial membrane is not utilised to produce ATP and consequently becomes very high. A high proton gradient in turn slows down fuel oxidation, because the mitochondria need to pump out protons against an increased gradient.
 
One could compare this to pushing a trolley up a slope. The steeper the slope (i.e., the height gradient), the more difficult it becomes to push up the trolley.
 
Thus, when little ADP is available and the proton gradient accumulates, fuel oxidation slows down because the energy derived from burning the fuel is not enough to pump out more protons.
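For readers who like numbers: the “steepness” of this proton gradient is classically quantified as the proton-motive force, which combines an electrical and a chemical term. The formula is standard textbook biochemistry, and the values below are typical magnitudes rather than measurements from our own work:

```latex
% Proton-motive force across the inner mitochondrial membrane:
\Delta p \;=\; \Delta\Psi \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
% At 37 C, 2.303RT/F is roughly 61.5 mV. With typical magnitudes of
% about 160 mV for the membrane potential and about 0.8 pH units
% (matrix more alkaline), both terms drive protons inward, so the
% total force is roughly 160 + 61.5 * 0.8, i.e. about 210 mV.
```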
 
Uncouplers function by providing an alternative route for protons to pass through the inner mitochondrial membrane without producing ATP. It is as if, in a hydroelectric power plant, we provided alternative channels for water to pass through the dam while bypassing the turbines. The water would flow continuously without producing any electrical energy.
 
 
In mitochondria, uncouplers dissipate the proton gradient by shuttling protons from the outside to the inside of the inner mitochondrial membrane. This creates a futile cycle in which fuels are oxidised to create a proton gradient, which is then immediately dissipated. As a consequence, fuels are continuously burnt, resulting in weight loss.
 
Uncoupler compounds achieve this feat thanks to two important properties. First, they can exist in both a proton-bound and a proton-less state. Second, they are membrane-permeable in both of these states.
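To make the first property concrete: the balance between the proton-bound and proton-less forms of such a weak acid follows the Henderson–Hasselbalch relationship. Below is a minimal sketch, assuming a pKa of 6 for a hypothetical uncoupler and typical pH values for the two sides of the inner membrane; none of these numbers describe a specific compound.

```python
def protonated_fraction(pH: float, pKa: float) -> float:
    """Henderson-Hasselbalch: fraction of a weak acid present in its
    protonated (neutral, proton-carrying) form at a given pH."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Illustrative values only: a hypothetical uncoupler with pKa ~6.0,
# and typical pH values for the intermembrane space and the matrix.
pKa = 6.0
for side, pH in [("intermembrane space", 7.0), ("matrix", 7.8)]:
    print(f"{side}: {protonated_fraction(pH, pKa):.1%} protonated")

# The acid binds a proton more readily on the acidic outside and
# releases it in the alkaline matrix: net inward proton transport.
```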
 
One could argue that with the discovery of the new blockbuster incretin anti-obesity drugs, such as the GLP-1 agonist semaglutide (e.g. Ozempic) and the dual GLP-1 and GIP agonist tirzepatide, the weight loss problem has been solved. These drugs indeed cause amazing weight loss within a short time span.
 
What is more, they come with few side effects. The most common are gastrointestinal symptoms, which approximately 10% of patients taking semaglutide experience. With tirzepatide, the percentage of gastrointestinal side effects drops to approximately 5%.
 
However, after discontinuing these drugs, most patients regain two thirds of the weight they have lost while on the drug. Even if patients continue taking the drug, the weight loss effect tends to plateau after about a year. In other words, patients become tolerant to the drug.
 
While it is likely to be difficult for the body to develop tolerance to uncouplers, these compounds have their own set of side effects. They fall into two categories.
 
The first is on-target effects. While uncouplers increase fuel oxidation, they do so at the cost of mitochondrial ATP production and therefore cause an energy deficit. This can cause dangerous complications in highly energy-dependent organs such as the heart if the uncoupler is not dosed carefully.
 
The proton gradient across the inner mitochondrial membrane stores potential energy, which is normally used to drive the synthesis of ATP. When mitochondria dissipate this proton gradient, the energy that is not utilised to synthesise ATP is instead released as heat. Hence, another common side effect of uncouplers is potentially excessive sweating.
 
The second category is off-target effects on unrelated cellular processes, which are common to most drugs. As such, one of the objectives in drug discovery is to develop compounds with the highest possible selectivity towards the intended target.
 
Common off-target effects of uncoupler compounds include dissipation of the proton gradient in other cellular membranes, and an inhibitory effect on the electron transport chain. In addition, the uncoupler compound niclosamide, which we have been particularly interested in, causes DNA damage.
 
To limit the unwanted effects of uncoupler drugs, researchers have pursued different approaches, for instance by developing an orally administered, controlled-release formulation of the uncoupler 2,4-dinitrophenol. This formulation consists of polymers that form a coating around the uncoupler drug. The polymer coating contains small pores, through which the uncoupler is slowly released.
 
As a result, the uncoupler is present in the body at low but steady concentrations that still exert a therapeutic effect. This avoids the high peak drug concentrations that would ensue if the uncoupler were administered as a pure drug. Indeed, the researchers found that the controlled-release uncoupler formulation has an LD50 (the dose at which a compound is lethal for 50% of tested animals) that is more than 10-fold higher than that of pure 2,4-dinitrophenol.
 
Another potential approach to limiting side effects is to target uncouplers to specific tissues. One very successful example is the development of an uncoupler prodrug. This prodrug is normally inactive, but is converted to an active uncoupler by drug-metabolising cytochrome P450 enzymes that are predominantly expressed in the liver. Therefore, the active uncoupler only accumulates in liver tissue, thus avoiding on-target and off-target side effects in other tissues.
 
The uncoupler prodrug was shown by the researchers to be a remarkably potent and safe therapy for fatty liver disease, by promoting the oxidation of fatty acids in liver cells.
 
Notably, fatty liver disease is a major public health problem worldwide, including in Singapore. As I recently learned from a newsletter published by NUS Medical School, the prevalence of non-alcoholic fatty liver disease (NAFLD) in Singapore in 2019 was estimated at approximately 1,492,000 cases, out of a total population of 5.7 million (of which approximately one third are mostly young foreign workers, who are less likely to have fatty liver disease). That would mean that approximately one in three Singaporeans has fatty liver disease. What is more, the prevalence of fatty liver disease in Singapore is expected to increase to 1,799,000 by 2030.
 
In Singapore, rates of type II diabetes also continue to rise, with more than 400,000 people currently living with the disease. This number is projected to reach one million by 2050.
 
Finally, there is obesity, a major risk factor for diabetes. Obesity shows an upward trend in Singapore as well. When using the recommended Asian Body Mass Index cut-offs (which are lower than the thresholds for Western populations), in 2020, 38% of the Singapore population was overweight and 21% obese. As such, people with a body weight in the normal range are currently in the minority in Singapore.
 
This then brings me back to our own research studies, which were aimed at finding ways to circumvent the side effects of uncouplers and understand the structural features responsible for the side effects.
 
In our first study, we tried to target uncouplers specifically to adipose tissue, thus potentially circumventing systemic side effects of these drugs.
 
Many lipophilic compounds (i.e. compounds that mix well with lipids) have been found to selectively accumulate in adipose tissue. Hence, we explored the feasibility of conjugating uncoupler compounds with lipophilic groups. These compounds were synthesised by my former PhD student Mei Ying in the laboratory of Prof. Tan Choon Hong at NTU.
 
 
Mei Ying used hydrocarbon chains of different lengths, which were linked to the uncoupler compound FCCP via an ether bond. We explored their uncoupling activity and found that chain lengths of four or eight carbons gave the highest activity. The activity markedly decreased with a 12-carbon chain, likely because of the limited water solubility of that compound.
 
 
Hence, we decided to investigate the functional uncoupler conjugated with the longest hydrocarbon chain that still exhibited good activity, i.e. the 8-carbon chain conjugate. We referred to this compound as NMY1009, in line with the convention of labelling novel compounds with the initials of the person who synthesised them.
 
 
After characterising the compound in cells and isolated mitochondria, Mei Ying went to the laboratory of Toni Puig-Vidal at the University of Cambridge to test the compound in mice. In Cambridge, Mei Ying worked with a very experienced and dedicated senior postdoc, Sergio Rodriguez, who guided Mei Ying and led many experiments to advance the work.
 
To test the effect of NMY1009 on weight gain and metabolic parameters, mice were placed on a high fat diet. However, disappointingly, NMY1009 did not elicit therapeutic weight-lowering or blood glucose lowering effects in mice.
 
Through another collaboration, with Gopal Venkatesan and Giorgia Pastorin from the Department of Pharmacy at NUS, we eventually found out why: NMY1009 was metabolically unstable. The ether bond between the uncoupler and the hydrocarbon chain was rapidly cleaved in aqueous solution.
 
Yet another collaborator, Marcella Bassetto from Cardiff University, synthesised a lipophilic version of another uncoupler compound related to 2,4-dinitrophenol, which I discussed at the beginning of this post. The dinitrophenol analogue carrying an 8-carbon hydrocarbon chain, also conjugated via an ether bond, had an even higher uncoupling activity than its parent compound. However, like NMY1009, the lipophilic dinitrophenol analogue was highly unstable in vivo when administered to mice.
 
Therefore, although conjugation of a hydrophobic hydrocarbon chain to uncoupler compounds resulted in sustained or improved uncoupling activity, linking lipophilic hydrocarbon chains via ether bonds led to metabolic instability. This was a rather unexpected result. It taught us that in the future it will be necessary to conjugate lipophilic groups via other chemical bonds.
 
 
The goal of our second study was to find out which structural features in two of the most commonly used uncouplers, niclosamide and FCCP, are responsible for their side effects. As mentioned above, niclosamide causes DNA damage, while FCCP causes an inhibitory effect on the electron transport chain.
 
To carry out our study, our collaborator Marcella from Cardiff synthesised a number of niclosamide analogues. We also used some FCCP analogues that were synthesised by Mei Ying.
 
With the help of these compounds, we managed to obtain some interesting insights. By comparing the structural features and activities of the different analogues, we identified FCCP analogues that do not inhibit mitochondrial oxygen consumption but still provide good, although less potent, uncoupling activity. For niclosamide, we identified the two functional groups responsible for the uncoupling activity of the compound and characterised the roles these groups play. We also deduced that the nitro group that is crucial for causing DNA damage is also critical for promoting high uncoupling activity.
 
These structural investigations provide important information that could aid further drug development. For instance, given the critical role of the niclosamide nitro group in both uncoupling and DNA damage, it may be possible to substitute this group with other functional groups that exert critical electron-withdrawing properties to promote uncoupling without causing DNA damage.
 
Reassuringly, a recent publication by another group has confirmed our findings with niclosamide.
 
There is indeed much interest in niclosamide because the compound is an FDA-approved drug that is currently in clinical use to treat parasitic tapeworm infections. In recent years, niclosamide has been considered for use as a repurposed drug in the treatment of various diseases, including diabetes, fatty liver disease and cancer.
 
Notably, apart from uncoupling mitochondria and causing genotoxic stress, niclosamide has also been shown to be a potent inhibitor of the mammalian (sometimes also called ‘mechanistic’) target of rapamycin complex 1 (mTORC1 for short).
 
mTORC1 is a protein kinase complex that modifies various substrate proteins through phosphorylation. It is one of the most important kinases in our cells and one of, if not the, most intricately regulated.
 
One of the mTORC1 phosphorylation targets is a transcription factor called TFEB (short for Transcription Factor EB), which is responsible for inducing the production of more lysosomes. Lysosomes are cellular organelles responsible for digesting damaged macromolecules or parts of the cell, and in this process they generate nutrients during times of starvation.
 
Interestingly, we recently found that niclosamide induces a dramatic translocation of TFEB into the nucleus. Although this may be simply a consequence of mTORC1 inhibition by niclosamide, there is recent evidence that TFEB nuclear translocation is physiologically independent of mTORC1 activity, but instead dependent on regulatory complexes that are upstream of mTORC1.
 
As such, we are currently investigating the mechanism through which niclosamide regulates TFEB and why the effect is specific to niclosamide and not observed with other compounds that disrupt mitochondrial function.
 
I am happy that our paper on niclosamide has already been cited around ten times within the first year of its publication, and that, as mentioned above, a recent paper has confirmed many of our results (and acknowledged our study). As such, I was taken by surprise when I recently received an email from our faculty telling me that because our paper was not published in a journal on the school-approved list of scientific journals, I might be penalised, instead of being acknowledged, for it.
 
I had in fact not checked whether the journal in which we published our paper, an open access journal called FEBS Open Bio, is part of the approved list. However, given that our university has a publishing agreement with FEBS Open Bio, which covers the open access publishing fee for NUS researchers, it did not occur to me that our school would not allow us to publish in this journal.
 
A major reason why our school came up with an approved list of journals is a metric used to measure the research performance of universities, the so-called Field-Weighted Citation Impact (FWCI). The FWCI compares the number of times a publication is cited to the average number of citations of similar publications. Here, “similar” refers to publications of the same publication year, publication type (e.g. an original research article or a review paper), and subject area, all of which are known to affect citation rates.
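For illustration, here is a minimal sketch of how the metric is computed for a single paper; the citation counts are entirely hypothetical:

```python
from statistics import mean

def fwci(paper_citations: int, similar_citations: list[int]) -> float:
    """Field-Weighted Citation Impact of a single paper: its citation
    count divided by the mean citation count of comparable papers
    (same publication year, publication type and subject area).
    A value of 1.0 means the paper is cited exactly as expected."""
    return paper_citations / mean(similar_citations)

# Hypothetical numbers, purely for illustration: our paper has 10
# citations; comparable papers average 8 citations.
print(fwci(10, [5, 8, 11]))  # 10 / 8 = 1.25, i.e. 25% above expectation
```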
 
By this definition, publishing papers that do not attract many citations will lower the FWCI of an institution. Thus, given that lower-ranked scientific journals are less likely to produce many citations (as the ranking is based on the average citation rate of papers in a journal), it makes sense to discourage authors from publishing in journals with low rankings. As such, the idea of coming up with an approved journal list seems like a well-intentioned measure.
 
However, there are in my opinion also problems with this approach.
 
The FWCI was intended to de-emphasise the obsession of academic institutions with journal rankings (which are based on so-called Journal Impact Factors). Evaluating a publication based on the impact factor of the journal in which it appeared can be very misleading, because the citation rate of an individual paper can be very different from the average number of citations of papers published in the same journal.
In contrast to the journal impact factor, the FWCI only considers the citation rate of individual papers.
 
However, defining a list of approved journals primarily based on impact factor-based journal rankings defeats the very reason why the FWCI was developed.
 
Secondly, the policy encourages researchers to publish more comprehensive and potentially more impactful papers. This is potentially a good thing, given the vast number of papers that are published and the large number that go unnoticed. However, the reason why so many papers go unnoticed is, in my opinion, not that too many papers are published, but that most papers do not answer relevant scientific questions.
 
What the white-list policy also does is make it more difficult for undergraduates to publish a first-author paper. Publishing a paper helps undergraduates secure scholarships and other opportunities. In fact, without having published a couple of papers as an undergraduate, I likely would not have been able to pursue a research career. Moreover, publishing a paper is also a strong motivating force and an immensely useful learning experience for undergraduates.
 
Luckily, it is possible to ask our Head of Department for approval to publish a paper in a journal that is not on the white list, which I will be sure to try the next time around.
 
 
HIGHLIGHTS FOR WEEK OF 27 OCTOBER – 2 NOVEMBER
 
 
Two weeks ago I posted one of two research updates that I wrote back in 2020 and never published. Here is the second one:
 
Sensing Sugar
 
In this post, I would like to discuss a research paper that made some surprising discoveries about how we sense sugar. The article starts out with the astonishing fact that the average American consumes more than 45 kg of sugar per year. The paper then investigates how we sense sugar, and, equally astonishingly, it is not (only) via the mouth.
 
According to common knowledge, sweet compounds are detected by taste receptors on the tongue and palate epithelium. These taste receptors also sense artificial sweeteners, which were introduced more than four decades ago. However, when given to mice, artificial sweeteners are not as rewarding as real sugars in long-term choice tests.
 
This suggested that perhaps something is different about sensing of real sugar and artificial sweeteners. Indeed, it has been previously observed that mice lacking the ability to taste sweet compounds still develop a preference for sugar. And this sweet taste receptor-independent craving for sweet food is only elicited by real sugar, but not by artificial sweeteners.
 
The paper began with an interesting experiment in which the researchers exposed mice to both a sugar solution and an artificial sweetener (acesulfame K) solution. Initially, the mice showed no preference. However, after half a day the mice started to switch to the sugar solution, eventually almost exclusively drinking the water containing glucose. The same response also occurred in mice that lacked sweet receptors on their tongue and palate. These results hence suggested that sugar, but not artificial sweetener, can be sensed after ingestion, independently of taste receptors in the mouth.
 

Sugar activates a gut–brain sugar-sensing axis. Left: mice were allowed to choose between 600 mM glucose and 30 mM acesulfame K artificial sweetener, with preference tracked by electronic lick counters. Right: the bars at the top illustrate the lick frequency for glucose (red) versus artificial sweetener (blue) during the first and last 2,000 licks of the behavioural test. Initially the mice show no preference, but by 48 h a clear preference for glucose over artificial sweetener can be observed.
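As an aside, the “preference” plotted in such two-bottle tests is typically just a ratio of lick counts. A minimal sketch, with entirely hypothetical numbers:

```python
def preference_index(licks_glucose: int, licks_sweetener: int) -> float:
    """Two-bottle preference index: fraction of total licks directed
    at the glucose bottle (0.5 = no preference, 1.0 = glucose only)."""
    total = licks_glucose + licks_sweetener
    return licks_glucose / total if total else 0.5

# Hypothetical counts for the first and last 2,000 licks of a test:
print(preference_index(1000, 1000))  # 0.50 -> initially indifferent
print(preference_index(1900, 100))   # 0.95 -> strong glucose preference
```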
 
The researchers hypothesised that sugar may be detected by specific sensory neurons in the gut and stimulate particular brain regions that mediate sugar preference. Thus, they tried to identify neurons that become activated when mice are given glucose. They did this by staining brain sections for increased expression of the transcription factor c-fos, a marker of neuronal activation. They found that a region called the caudal nucleus of the solitary tract (cNST) becomes activated upon ingestion of sugar, but not of artificial sweetener or water. Importantly, the cNST also became activated when the sugar (but not sweetener) solution was infused directly into the stomach, bypassing the taste receptors in the mouth.
 
Detection of glucose in the gut leads to the activation of the caudal nucleus of the solitary tract (cNST) in the brain.
 
The researchers then hypothesised that the signal from the gut to the brain is transduced via the vagus nerve. To prove this, they performed a rather complicated experiment. They monitored the activation of the cNST region in the brain via a technique called fibre photometry: they expressed in cNST neurons a genetically encoded sensor protein that becomes fluorescent when the calcium concentration in the neuron increases, an increase in cellular calcium concentration being an indicator of neuronal activation. To detect the increased fluorescence, they implanted an optical fibre into the brain near the cNST region. When they then delivered glucose into the intestine (duodenum) via a catheter, they observed that the fluorescence of the excitatory cNST neurons increased. Importantly, this response was absent in mice with disrupted vagus nerves. This confirmed the presence of a gut–brain sugar-sensing axis that is mediated by the vagus nerve.
 
In another complex experiment, the researchers showed that the neurons in the cNST region of the brain are required for the sugar preference. Showing that something is required usually involves removing it and seeing if the effect is still there. Hence, to determine if the neurons are required for sugar preference, the researchers removed these neurons and determined if the mice lose the preference for sugar. To specifically remove the cNST neurons, the authors expressed tetanus toxin in these cells, which blocks synaptic transmission and thereby functionally silences the neurons. The DNA encoding the tetanus toxin was delivered into the cNST region via adeno-associated virus (AAV), which was injected into this specific brain region. However, expression of the tetanus toxin required recombination of the coding DNA sequence, which was mediated by a Cre recombinase. The Cre recombinase coding sequence was placed downstream of a promoter that becomes activated by c-fos (a transcription factor that becomes activated upon neuronal excitation). Hence, in the actual experiment, the virus was injected into the brain region, and then the mice were given sugar water. This led to the activation of the cNST neurons, activation of the c-fos transcription factor, expression of Cre recombinase, recombination of the tetanus toxin coding sequence and subsequent expression of tetanus toxin in these neurons. This resulted in the silencing, and hence the functional removal, of the sugar-activated neurons.
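The logic of this genetic strategy can be summarised as a simple AND gate: the toxin is only expressed in neurons that both received the virus and were activated by sugar. A toy sketch of that logic (my simplification, not the authors' method):

    # Toy model of the activity-dependent silencing logic (a genetic AND gate):
    # toxin expression requires both AAV delivery and sugar-induced activation,
    # because only activated neurons express Cre from the c-fos-driven promoter.
    def expresses_toxin(received_aav, activated_by_sugar):
        cre_present = received_aav and activated_by_sugar
        return cre_present  # Cre recombines the cassette, enabling expression

    print(expresses_toxin(True, True))   # True: sugar-activated, infected neuron
    print(expresses_toxin(True, False))  # False: infected but never activated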
 
When the researchers subsequently gave the mice glucose again, a preference for glucose was no longer observed, showing that the cNST neurons are indeed necessary for gut sugar sensing – a rather complicated experiment to show that activation of cNST neurons is required for sugar preference. But it is nonetheless an important one, which may in the future even have therapeutic implications, because it suggests potential targets for drugs to inhibit sugar craving in humans.
 
With all this knowledge, the researchers then tried to rewire the whole pathway and make mice crave a different stimulus, much in the spirit of physicist Richard Feynman, who said, “What I cannot create, I do not understand”. To do this, they first identified the exact neurons in the cNST brain region that mediate the sugar preference. This was accomplished by stimulating mice with sugar and then staining sections of the cNST region for the transcription factor c-fos (to label the neurons that became activated). The researchers found that the activated neurons are positive for expression of the specific neuronal marker protein proenkephalin. Hence, sugar activates specific neurons that express proenkephalin. The authors then expressed a synthetic “designer” receptor in these proenkephalin-positive neurons. The designer receptor responds to the drug clozapine, which binds to the receptor’s ligand-binding domain on the cell surface. Binding of clozapine to this designer receptor then results in the activation of the neurons.
 
In the actual experiment, the researchers gave mice artificially sweetened cherry-flavoured and grape-flavoured solutions as drinking water. The grape solution was much less sweet, but contained clozapine. What the researchers found was that after 48 h the mice switched to the less sweet, but clozapine-containing grape-flavoured solution. This suggests that the clozapine, after ingestion and travel through the blood circulation, was able to activate the designer receptor-expressing cNST neurons and mediate a preference for the grape-flavoured solution. This confirms that activation of the specific cNST neurons was sufficient to create sugar preference. And the mice expressing the synthetic clozapine receptor could in theory be made to prefer any food, as long as clozapine was added to it.
 
In the experiment, DNA encoding the clozapine-activated designer receptor (called a DREADD) was delivered into the cNST area. The designer receptor was only expressed by glucose-sensing proenkephalin neurons, which was achieved by using a similar approach as described above for the tetanus toxin (i.e., expression of the DREADD receptor required a recombination event, which was mediated by Cre recombinase, which in turn was expressed from the proenkephalin gene promoter). Upon expression of the clozapine receptor in proenkephalin-positive cells, the mice were then tested for their preference between two flavours for 48 h. Shown is an example using a cherry-flavoured drink (containing 2 mM acesulfame K artificial sweetener) versus a grape-flavoured drink (with 1 mM acesulfame K). After initially conditioning the mice (Pre), clozapine was added to the less-preferred flavour. This resulted in a switch in preference to the less sweet grape solution, as shown in the diagram on the right. The preference for the less sweet grape solution was initially below 0.5, meaning it was not preferred. After adding clozapine, the preference increased in all mice to above 0.5 (meaning the mice licked the less sweet grape solution more often than the sweeter one).
 
The study uncovered the entire neuronal pathway through which sugar preference is mediated. But it did not really clarify how the gut senses the glucose taken in by the mice. There are different ways through which the gut can sense external stimuli. There could be peripheral sensory cells in the gut mucosa. Alternatively, the sensory cells could be located further up and project sensory nerve endings to the site of stimulation to detect specific external signals (see the diagram below, taken from Wikipedia).
 
 
In the case of glucose sensing in the gut, it seems likely that there must be sensory neurons in the mucosal wall of the intestine, which upon stimulation by glucose activate vagal nerve endings in the gut. As a first step to characterize the sensing mechanism, the authors showed that glucose sensing in the gut is dependent on the main glucose transporter in our gut, SGLT1. Interestingly, the glucose analogue 3-O-methylglucose, which can also be taken up via SGLT1 but is not further metabolized in cells, can also stimulate the gut-brain sugar axis. This suggests that glucose metabolism is not necessary for the sensing mechanism. Also of note, fructose is not a substrate for SGLT1, and as expected, fructose does not activate this sugar sensing pathway. This is likely a reason why the combination of glucose and fructose (in table sugar or high-fructose corn syrup) is particularly harmful: glucose mediates the craving, and fructose, taken up in the gut by other sugar transporters, mediates the adverse effects via its metabolism in the liver.
 
In conclusion, the study has described a novel mechanism through which glucose can be sensed in our gut. This may have important implications. For instance, the study identified a number of potential targets in the gut-brain sugar sensing axis to develop drugs to inhibit sugar craving. The authors also suggest that “it may be possible to develop a new class of sweeteners that activate both the sweet-taste receptor in the tongue and the gut–brain axis, and consequently help to moderate the strong drive to consume sugar.”
 
 
HIGHLIGHTS FOR WEEK OF 20 – 26 OCTOBER
 
 
De Novo Protein Design from Natural Language
 
Recently, a number of papers have been published describing tools for de novo protein design based on large language models. These models can use text or keyword-based information, describing the desired function of a protein, to predict the amino acid sequences of novel proteins that fit this description.
 
One model with a highly impressive performance is Pinal, described in a recent pre-print publication by Dai et al. (2025). Pinal can design novel proteins based on natural language descriptions of the desired protein function.
 
As the authors point out, translating language input directly into sequence information is difficult. Therefore, the model takes a two-step approach. The natural language descriptions are first translated into structural information. To achieve this, the authors use discrete structure tokens that are generated by the “vector quantization method”. This method “works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them”, thereby selecting “a set of points to represent a larger set of points”.
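To make the idea concrete, here is a minimal vector quantization sketch in Python: each continuous embedding is replaced by the index of its nearest codebook vector, yielding a discrete token. This illustrates the general method only, not Pinal's actual tokenizer:

    import numpy as np

    # Minimal vector quantization: map each continuous embedding to the index
    # of its nearest codebook vector, producing discrete structure "tokens".
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(8, 4))    # 8 codewords, 4 dimensions each
    embeddings = rng.normal(size=(5, 4))  # 5 continuous embeddings to quantize

    # Squared Euclidean distances between every embedding and every codeword.
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = dists.argmin(axis=1)         # one discrete token per embedding
    print(tokens)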
 
Subsequently, the model converts this language-derived structural information, together with the original language description, into amino acid sequences. By using structure tokens as an intermediate representation, the authors substantially improved the performance of their protein prediction model.
 
To train their model, the researchers created a dataset named SwissProt-Aug, which consists of 4 million natural language–protein pairs. Thus, in SwissProt-Aug, functional descriptions of proteins are paired with their sequences, based on the UniProt database.
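The shape of such a training pair is easy to picture. An illustrative record (my guess at the structure for illustration only, not the actual file format):

    # Illustrative natural language-protein training pair (invented example;
    # the real SwissProt-Aug entries are derived from UniProt annotations).
    pair = {
        "description": "Catalyzes the oxidation of ethanol to acetaldehyde.",
        "sequence": "MSTAGKVIKCKAAVLWE...",  # truncated for illustration
    }
    print(pair["description"])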
 
However, this dataset was too small for training the model. The authors also point out that relying solely on the Swiss-Prot database of well-characterised proteins for training results in protein predictions with good foldability and good alignment with the natural language instructions. However, the generated protein sequences exhibit a strong similarity to known proteins. In other words, the newly designed proteins display a low degree of novelty.
 
In contrast, the authors point out that “biologists are more interested in finding proteins that are structurally distinct from known proteins”. Therefore, the authors made use of “synthesized data”, i.e. proteins with structures predicted by the AlphaFold neural network.
 
The authors therefore turned to the UniRef50 database, which contains two orders of magnitude more proteins than Swiss-Prot. The proteins in UniRef50, however, lack highly accurate annotations. To functionally annotate these proteins, the researchers used a new protein language model named ProTrek.
 
ProTrek is a tri-modal protein language model that can derive textual information from both sequence and structural perspectives.
 
According to the Google AI summary, “ProTrek is a computer model that understands proteins by looking at three things: their amino acid sequence, structure, and function. It’s like a translator that can search for proteins by linking any of these three aspects together, allowing researchers to quickly find related proteins even if their shapes are different, but they perform a similar function.”
 
ProTrek can be queried via a web-based prompt, and according to Google AI, the model can answer a variety of search questions, such as:
-“Find proteins with this function, even if their 3D shapes are different.”
-“Find proteins with a similar structure to this one, and see what their functions are.”
-“Find the function of this protein, given its sequence.”
 
Using the last of these functionalities, the researchers who developed Pinal were able to functionally annotate the proteins from the UniRef50 database. This resulted in a total of 400 million natural language–protein pairs that could be used for training Pinal.
 
After training their model, the authors evaluated the foldability, language alignment and novelty of the predicted proteins. ESMFold, a protein language model that, like AlphaFold, predicts 3D protein structures from amino acid sequence input, was used to assess foldability. To evaluate the language alignment, the researchers again used the ProTrek model. Finally, to evaluate the novelty of the designed proteins, the researchers compared their structures to known structures in the PDB database.
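Conceptually, each designed protein is therefore scored along three axes. A sketch of that evaluation is below; all helper functions are placeholder stand-ins, since the real tools (ESMFold, ProTrek, structure search against the PDB) have their own interfaces that are not reproduced here:

    # Sketch of the three evaluation axes for a designed protein. The helpers
    # are placeholders standing in for real tools (ESMFold for foldability,
    # ProTrek for language alignment, structure search against the PDB).
    def foldability(sequence):
        return 85.0  # placeholder: structure-prediction confidence

    def language_alignment(description, sequence):
        return 0.7   # placeholder: description-to-protein alignment score

    def max_similarity_to_pdb(sequence):
        return 0.5   # placeholder: best structural match in the PDB

    def evaluate_design(sequence, description):
        return {
            "foldability": foldability(sequence),
            "alignment": language_alignment(description, sequence),
            "novelty": 1.0 - max_similarity_to_pdb(sequence),  # less similar = more novel
        }

    print(evaluate_design("MSTAG...", "an alcohol dehydrogenase"))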
 
The authors then compared their Pinal protein design tool with other existing systems, using a test set of proteins with known function, sequence and structure. They compared it to ProteinDT and Chroma, protein design tools which, similarly to Pinal, accept long sentence instructions. Thus, they used instructions summarising the known function of each test set protein and asked the various models to design five proteins based on these long sentence instructions.
 
To evaluate how well the sequence and structure of the predicted proteins matches their description, the researchers used ProTrek (which, as mentioned above, can derive textual information from sequence and structural input). “For both sequence and structure prediction, the authors reported the maximum and mean scores among the 5 generated proteins from language instruction.”
 
Notably, the researchers found that the ProTrek scores (i.e. the alignment of the language instruction with the sequence and structure of the designed proteins) of proteins designed by ProteinDT and Chroma showed no significant difference compared to arbitrarily selected proteins from Swiss-Prot (see figure). In other words, the likelihood with which these design tools predict functional proteins from text descriptions is no greater than chance. In contrast, Pinal produces ProTrek scores that are almost as high as those based on the known pairings of description with sequence and structure of the protein test set (= ground truth scores).
 
 
The authors also evaluated the foldability of the predicted proteins using AlphaFold. They found that proteins designed by Pinal exhibit foldability scores comparable to sequences of well-characterised proteins from the Swiss-Prot database. This indicates that the Pinal design tool predicts proteins with good structural plausibility. In contrast, sequences from Chroma and ProteinDT often failed to fold into 3D structures.
 
Comparison of Pinal and ESM3
 
ESM3 is another recently published large language model-based protein design tool. In ESM3, the three protein modalities (sequence, structure and function, the latter in the form of keyword text) can be used interchangeably as input or desired output. In the application examples provided in the original paper, the authors used functionality keywords, sequence information (e.g. the desired length of the designed protein), plus structural coordinates as input.
 
For instance, to design novel green fluorescent proteins, the authors used as input the prompt “fluorescence” (as the functionality keyword), a protein length of 229 amino acids (as the sequence prompt), and, as the structural prompt, the identity and position of the six amino acids critical for forming the chromophore in the native Green Fluorescent Protein from jellyfish, as well as the structural arrangement of these amino acids, as shown below.
 
Protein design prompt (left) used to generate the synthetic GFP candidate on the right of the image.
 
Native jellyfish GFP consists of a beta barrel that forms a cage to facilitate the formation of the chromophore in its interior. ESM3 designed a protein with a similar overall structure and with spectroscopic properties similar to those of known natural fluorescent proteins.
 
The designed protein sequence was 58% identical to the closest natural fluorescent protein. Based on this, the authors determined that this difference in protein sequence “represents an equivalent of >500 million years of evolution from the closest protein that has been found in nature”.
 
To compare the Pinal protein design model to ESM3, the developers of Pinal used keywords for a fair comparison, given that ESM3 only recognises keywords but lacks understanding of natural language.
 
The researchers found that Pinal achieves a higher ProTrek score, indicating better alignment of the predicted sequence and structure of the designed proteins with the language instruction. Furthermore, while half of the proteins from Pinal exhibit predictable functions, only around 10% of the proteins generated by ESM3 do so. Finally, the proteins designed by Pinal also exhibit better foldability.
 
The researchers also carried out laboratory experiments to validate Pinal. They gave Pinal the prompt “Please design a protein that is an alcohol dehydrogenase”, and then evaluated whether the protein designs predicted by Pinal were functional. Impressively, five of the eight selected alcohol dehydrogenase sequences could be successfully expressed and purified, and exhibited the ability to oxidise ethanol to produce NADH from NAD+ in an enzyme activity assay.
 
Why have I been so keen to learn more about these protein design tools? The main reason is that these models might be an excellent teaching tool to let students learn by doing original and potentially exciting research. Similar to the described validation experiments, students could design novel proteins, evaluate these designs computationally, and eventually produce these proteins and test their activity in laboratory experiments.
 
What makes this type of lab practical exciting for students is firstly that the students can decide by themselves what kind of novel proteins they want to design. Secondly, the outcome of the experiments is unknown, which resembles what real research is like.
 
Moreover, this kind of activity is also useful as students get to solve real problems along the way. As such, I am currently trying to find a way to conduct these activities in a postgraduate course next year.
 
 
HIGHLIGHTS FOR WEEK OF 13 – 19 OCTOBER
 
 
I recently realised that five years ago I wrote two research updates on sugar metabolism that I never published. In order to not let them go to complete waste, I decided to publish them in my weekly highlights. Here is the first.
 
5 July 2020
New Research: How bad is fructose for us?
 
Recently, I came across a number of interesting papers on fructose. Fructose is commonly taken in as table sugar, as high-fructose corn syrup and in fruits. Table sugar, or sucrose, is a dimeric sugar consisting of equal parts glucose and fructose. High-fructose corn syrup, which is added to coke and other soft drinks, contains 55% fructose and 45% glucose.
 
First, some not-so-fun facts about fructose and sugar in general from Véronique Douard and Ronaldo Ferraris's review on fructose transporters. Centuries ago, sugar used to be much more expensive. And so in 1700, Europeans consumed only about 5 grams of sugar per day. In 1950 it was 120 grams per day, and today it is 180 grams per day on average. If we consider that 1 gram of sugar has 3.9 calories, then an average person today consumes about 700 calories from sugar alone. That is about a third of the recommended calorie intake (which is 2000 calories for women and 2500 for men). So one can imagine how many calories we can save if we simply stop eating and drinking sugar. One can of coke alone contains 39 grams of sugar, of which half is fructose.
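For anyone who wants to check the arithmetic behind these numbers:

    # Checking the numbers from the review: daily sugar calories and their
    # share of the recommended intake (2000 kcal women / 2500 kcal men).
    grams_per_day = 180
    kcal_per_gram = 3.9
    sugar_kcal = grams_per_day * kcal_per_gram
    print(sugar_kcal)                            # 702.0, i.e. ~700 calories
    print(sugar_kcal / 2000, sugar_kcal / 2500)  # ~0.35 and ~0.28 of intake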
 
There are some important differences between glucose and fructose. While glucose can be taken in and used by all cells in our body, fructose is only metabolized in the liver. This is because only the liver has the necessary transporters and enzymes to utilize fructose. (The intestine also expresses fructose transporter proteins in order to absorb fructose from food, which is then transported to the liver.) If we take in a lot of fructose with a meal or sugary drink, one can hence imagine that the liver, as the only fructose-metabolizing organ, is overwhelmed. The available fructose exceeds the needs of liver cells for energy production, and hence the liver cells are unable to utilize all the fructose. The excess fructose is converted into lipids, or fat, which ends up in the liver (as fatty liver) and other organs.
 
What is also different between glucose and fructose is that glucose metabolism is highly regulated. As a result, cells only take in glucose if they are in need of it, for instance to produce energy. Fructose metabolism, on the other hand, is not subject to such regulatory mechanisms. Hence, liver cells will continue to take up fructose beyond their needs, leading to more lipid accumulation.
 
In evolutionary terms, when our ancestors consumed lots of fruits, conversion of fructose into fat was a beneficial mechanism, because the stored fat allowed humans to survive periods of food shortage. However, this survival mechanism has turned into a metabolic liability in modern times, where we consume vastly increased amounts of fructose.
 
In one of the papers, published by Andres-Hernando et al. in Cell Metabolism in July this year, the authors looked at the role that fructose plays in the sugar-induced metabolic syndrome. The metabolic syndrome comprises a set of symptoms (central obesity, elevated fasting blood glucose concentrations, high blood pressure, elevated blood triglyceride concentrations, elevated blood cholesterol levels), and sugar increases all of them. The paper has a number of interesting findings:
 
Firstly, the authors found that glucose stimulates fructose uptake (see the figure below). What that means is that the combination of glucose and fructose is particularly bad. (I initially thought that this may explain why fruits, which contain lots of fructose, are generally considered healthy. However, I realized that fruits also contain both fructose and glucose at a ratio of about 2:1. Hence, there must be other reasons why fruits are good for us despite being rich in sugar, which probably has to do with the high fiber content.)
 
Fructose levels in portal vein blood of mice after giving the mice water (black), fructose alone (green), or the same amount of fructose together with glucose at an equal ratio (purple).
 
The authors then prevented fructose metabolism in cells. Fructose is normally taken up into cells via specific transporters. The first step in fructose metabolism is then the phosphorylation to fructose-1-phosphate, which is mediated by fructokinase. Fructokinase is also known as ketohexokinase (KHK) and exists as two separate isoforms, KHK-A and KHK-C. In the study, the authors knocked out both KHK-A and KHK-C, thereby preventing fructose metabolism and utilization. The researchers found that the KHK-A/C knockout mice, when exposed to a high fructose + glucose diet, gained much less weight, showed less fat accumulation, did not develop fatty liver and maintained better insulin sensitivity than normal wild type mice. This means that it is actually the metabolism of fructose that is responsible for the problems associated with sugar intake.
 
30-week body weight (BW) gain in normal wild type (WT) and fructokinase-deficient (KHK-A/C KO) mice exposed to water (black), 10% fructose + glucose (WT, purple open circles), or 30% fructose + glucose (KHK-A/C KO, purple closed circles). Fructokinase deficient mice gained much less body weight despite being exposed to more fructose + glucose.
 
Representative histological staining (H&E) images of liver sections from normal wild type (WT) or fructokinase-deficient (KHK-A/C KO) mice exposed to water or fructose + glucose (FG). Wild type mice develop fatty liver, as apparent from the accumulation of white lipid droplets. No lipid accumulation was observed in the liver of KHK-A/C KO mice. PT, portal triad; CV, central vein.
 
Mice lacking fructokinase actually took in less fructose via the gut. To compensate for this, in the experiments shown above the researchers gave 30% fructose + glucose to the fructokinase knockout mice, but only 10% to the wild type mice. To have a fairer comparison, the researchers then knocked out fructokinase only in the liver. As expected, this did not affect fructose uptake via the gut. Interestingly, despite the fact that fructose and glucose uptake is similar in wild type and liver-specific fructokinase knockout mice, absence of fructokinase in the liver completely protected mice from the sugar-induced metabolic syndrome. The mice lacking fructokinase in the liver showed none of the increase in body weight, development of fatty liver, insulin resistance, increase in white fat or decrease in brown fat seen in wild type animals. The latter findings indicate that preventing fructose metabolism in the liver even has effects on distant organs such as the adipose tissue.
 
Overall, the results suggest that when we take in sugar, the main problem is the metabolism of the fructose in the liver. Liver metabolism of fructose is responsible for the problems associated with sugar intake, and if it is prevented, mice stay healthy. So staying away from fructose-containing table sugar and high-fructose corn syrup in soft drinks is really a good idea.
 
Excessive amounts of sugar (fructose and glucose) induce fatty liver and metabolic syndrome in normal WT mice by inducing greater sugar intake, thus leading to increased fructose metabolism via KHK. The absence of KHK in the liver completely prevents fructose metabolism, leading to its excretion into the urine.
 
Interestingly, in another study the same group of researchers had previously shown that glucose by itself (without the addition of fructose) also causes signs of the metabolic syndrome in mice. In the study, the researchers provided mice for 14 weeks with drinking water containing 10% glucose (or tap water for the control mice), with an otherwise normal diet. For comparison, a can of Coca-Cola contains 9 grams of sugar per 100 ml (9%). The authors found that glucose water caused markedly increased body weight, increased fat accumulation (specifically accumulation of visceral fat) and elevated insulin concentrations (see figures below).
 
The researchers then tested the effect of blocking fructose metabolism. This was again done by using mice that lack fructokinase (ketohexokinase, KHK-A/C), the first enzyme of the intracellular fructose metabolic pathway. Interestingly, and surprisingly, the effects of glucose on body weight, fat accumulation and insulin levels were, at least partially, blocked by preventing fructose metabolism.
 
Effect of glucose consumption on body weight in normal wild type (WT) mice and mice that are unable to metabolize fructose (fructokinase or KHK-A/C knockout mice). The results show that in normal mice, drinking glucose water markedly increases body weight compared to drinking tap water. Inhibiting fructose metabolism partially inhibits the glucose-induced body weight gain.
 
Normal wild type mice that drink glucose water have greater fat accumulation and insulin concentrations in the blood, indicative of insulin resistance. These effects are markedly inhibited when fructose metabolism is blocked in the KHK-A/C knockout mice.
 
In addition, glucose consumption also caused the development of fatty liver and elevated the amount of fat in the form of triglycerides in the liver (see figure below). Again, this effect was prevented if metabolism of fructose was blocked.
 
The figure shows histological staining of liver sections. White lipid droplets accumulate upon consumption of glucose water in normal wild type mice but not in fructokinase-deficient KHK-A/C knockout mice.
 
How can these unexpected findings be explained? It turns out that glucose can be converted into fructose via the so-called polyol pathway. Indeed, giving glucose to mice led to elevated fructose concentrations in the blood and liver. Importantly, preventing glucose conversion into fructose by knocking out the first enzyme of the polyol pathway, aldose reductase, also inhibited body weight gain and fat accumulation in the liver upon giving mice glucose in their drinking water (see figure below).
 
Growth curves and liver fat (intrahepatic triglyceride) accumulation in normal wild type (WT) mice and mice that lack the enzyme aldose reductase (AR knockout mice) and are hence unable to convert glucose to fructose via the polyol pathway.
 
In the polyol pathway, glucose is converted to fructose via two enzymes, aldose reductase and sorbitol dehydrogenase. Interestingly, glucose was found to increase aldose reductase (but not sorbitol dehydrogenase) activity, thus promoting its own conversion to fructose.
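Schematically, the pathway can be written as two enzymatic steps (a simplified sketch):

    # Simplified two-step polyol pathway: glucose is first reduced to sorbitol
    # by aldose reductase, then oxidised to fructose by sorbitol dehydrogenase.
    POLYOL_PATHWAY = [
        ("glucose", "aldose reductase", "sorbitol"),
        ("sorbitol", "sorbitol dehydrogenase", "fructose"),
    ]
    for substrate, enzyme, product in POLYOL_PATHWAY:
        print(f"{substrate} --[{enzyme}]--> {product}")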
 
 
The researchers also showed that the metabolic effects of giving mice glucose water are not due to greater energy intake. Likewise, the protective effect of blocking fructose metabolism is not due to lower calorie intake. These findings are important because they show that the harmful effects of glucose (and sugar in general) are not due to taking in more calories. Instead, glucose and fructose are harmful because they directly elicit damaging effects that more complex carbohydrates do not. These damaging effects are partly due to the very fast absorption of sugar and the resulting spikes in blood sugar concentrations, as well as the fact that fructose is only metabolized in the liver. In other words, if we take in sugar, it is the sugar itself and not the calories associated with it that causes the major health problems.
 
Nonetheless, it is important to keep in mind that the study was done in mice. Hence, it is currently not clear whether similar mechanisms operate in humans. However, the safest option is still to try to resist the temptation to eat too much sugar. This is how I managed to do it. But I subsequently discovered that a complete, lifelong commitment to abstinence works much better for me in most cases. This approach has allowed me to make many other diet changes over the past five years.
 
 
HIGHLIGHTS FOR WEEK OF 6 – 12 OCTOBER
 
 
The promises of using AI in scientific research
 
The most common way through which researchers currently utilise generative AI tools is likely to help them write manuscripts and grant applications. Indeed, according to a recent commentary in Nature, “an analysis of biomedical abstracts published in journals in 2024 found that at least 13.5% had LLM-generated text in their abstracts (Kobak et al. 2024)”. In 2025, this percentage is likely to be considerably higher.
 
In theory, researchers using AI tools to improve their writing is a good thing. It can save them valuable time and improve their applications. Nonetheless, as I have discussed recently, it would be preferable if there were a requirement and expectation that authors acknowledge the use of AI assistance.
 
In addition, scientists increasingly use AI in the actual research process. An example discussed in the Nature commentary mentioned above is The AI Scientist, an AI system that is aimed at achieving “a fully automated, open-ended scientific discovery process”.
 
The AI Scientist system was announced in 2024 by the Tokyo-based company Sakana AI and was described by researchers from the company in a preprint publication. This year, the researchers have published an advanced version of the system – The AI Scientist-v2.
 
The AI Scientist conducts fully automated research in computer science. The tool uses a large language model (LLM) to generate ideas, write code, and execute experiments. The AI Scientist then writes up the results as a proper research paper. The declared ultimate goal of Sakana AI is to have AI systems conduct their own research and make their own discoveries.
 
However, the Nature commentary also highlights that the papers generated by the AI Scientist sparked some controversy because the ideas and methodological approaches seem to resemble previously published papers.
The commentary cites two computer scientists at the Indian Institute of Science in Bengaluru, Tarun Gupta and Danish Pruthi, who have uncovered multiple papers generated by the AI Scientist system that used others’ ideas without acknowledging them.
 
Lack of acknowledgment of previously published research in the scientific literature is not a new phenomenon brought on by generative AI. I have encountered many instances where researchers did not cite previous studies that came to very similar conclusions, including a lack of citing some of our own studies.
 
For instance, in our 2011 paper, entitled “Inhibition of Cullin RING Ligases by Cycle Inhibiting Factor: Evidence for Interference with Nedd8-Induced Conformational Control”, we proposed a mechanism through which the bacterial effector protein Cycle Inhibiting Factor (Cif for short) inhibits a large family of cellular E3 ubiquitin ligases, the so-called Cullin E3 ligases. It was known that Cif deamidates a specific glutamine amino acid in the small protein Nedd8, which is part of the Cullin E3 ligase complex, and that the role of Nedd8 is to induce a conformational change in the Cullin E3 ligase complex, leading to its activation. Based on our experimental findings, we proposed that glutamine deamidation of Nedd8 prevents this conformational change, resulting in Cullin E3 ligase inhibition.
 
Four years later, a group from the US solved the crystal structure of the deamidated Cullin E3 ligase complex and published a high-profile paper, entitled “Gln40 deamidation blocks structural reconfiguration and activation of SCF ubiquitin ligase complex by Nedd8”. The paper came to the same conclusion that we had reached four years earlier. Yet, the paper contained no acknowledgment of our study.
 
When I emailed the authors about this omission, one of the corresponding authors replied that somehow the citation of our paper got lost in the final version. However, it seems much more likely that it was convenient for the authors not to acknowledge our study to make their own work appear more novel. This of course is not an isolated case, and there are many examples where researchers try to “increase the novelty” of their findings by omitting references to previous studies.
 
On the other hand, I have to admit that I have been guilty of not acknowledging a highly relevant paper myself. Back in 2003, after the publication of our paper “Permeability transition in rat liver mitochondria is modulated by the ATP-Mg/Phosphate carrier”, we received an email from a researcher who pointed out that her lab had previously published an article that came to very similar conclusions and that we did not cite. In this case, I clearly overlooked the reference and my only excuse is that back in 2003, it was much harder to find relevant references than it is now (or was in 2015).
 
Therefore, I believe that nowadays not citing others’ work out of ignorance is no longer a valid excuse, given the ease with which scientific information can be accessed and searched. By the same token, it is also easy to determine whether an author has failed to acknowledge others.
 
This of course, is very different in AI-assisted research. With generative AI, idea and knowledge generation are so complex that it becomes very difficult to acknowledge specific sources. One could argue that while plagiarism by AI is not intended, it is part of how these systems function and are being trained. Because there is no single source that can be attributed to an AI-generated idea or solution, it is also difficult to determine what does and does not count as plagiarism.
 
In addition, for me, the AI Scientist highlights another problem. The goal pursued by the AI Scientist appears to violate one of the guiding principles of AI-assisted research: AI should help humans solve questions. Instead, the main point of the AI Scientist seems to be to simply make more discoveries and publish more papers.
 
There has already been an explosion of scientific publications that nobody reads and that do not contribute to the scientific enterprise. The reason why these papers do not make significant contributions is partly because of the low scientific quality. But part of the reason is also that these papers do not address questions that scientists care about.
 
As I have discussed previously, Botvinick and Gershman have argued that two core aspects of scientific work should be reserved for human scientists: deciding “what problems to work on”, and ensuring “that human understanding remains the goal of science” (Binz et al. 2025). I believe that both of these guiding principles are being ignored in trying to develop a fully automated research engine such as the AI Scientist.
 
 
HIGHLIGHTS FOR WEEK OF 29 SEPTEMBER – 5 OCTOBER
 
 
My impossible goals
 
In March 2018, inspired by Thomas Frank’s blog and podcasts, I came up with an “impossible to-do list”, a compilation of ambitious goals that would be very difficult (or nearly impossible) to achieve.
 
Last week I received the confirmation email that I have qualified for the 2026 Boston Marathon. As such, I can cross one of my impossible goals off my list (although I still need to complete the race to truly achieve the goal).
 
This of course feels extremely exciting. It also prompted me to look at my other goals, which back in 2018 I divided into work-related, developmental and personal achievement-related goals.
 
Firstly, I noted that some of my impossible goals are no longer relevant. For instance, I have given up learning new languages. The main reasons for abandoning this goal are firstly that I realised that I do not enjoy learning languages, and secondly, that there are many other goals that now feel more meaningful.
 
Among my work-related goals, I have not met any of my discovery or publication-based goals:
 
-publish a paper in Cell Metabolism or PNAS
-discover the cause of galactosemia
-discover something important about Miro
-discover something important about Txnip
 
The only thing I have really achieved is to “knockout a gene using CRISPR”. With regards to my final goal in this category, to “have a lively and active lab”, it will probably be impossible to go back to my early days as an Assistant Professor, when I had a lab with over 10 graduate and undergraduate students. However, I now question whether I really want to.
 
I actually do enjoy now having a small, yet still pretty lively lab, with only undergraduates around. Having a small lab means I do not need to spend most of my time writing grants or doing grant administration. Instead, I have time to do other more meaningful and fun things, including doing my own experiments. As such, I did achieve my goal to have a lively lab in my own way.
 
With regards to my personal development-related goals, apart from the language goals that I have given up, I listed the following goals:
 
-become a successful blogger
-write a book on teaching
-write a book about personal improvement or a fiction book
-overcome external as well as internal anger
 
I feel proud that I have managed to keep my weekly blogging alive for the past seven years. Based on my current definition of successful blogging, i.e. being prompted to think about my life, my goals and my difficulties, to form opinions about society and the world, and to come up with new ideas, I have also been successful. I also realised that I enjoy not worrying about how many people have accessed my posts, which I think is one major advantage of a personal website as opposed to a social media page.
 
With regard to my book on teaching, I am still at the proofreading stage, which turns out to take a lot of time (and explains why outsourcing this task is very expensive).
 
What about anger management: I certainly feel less angry than I used to, and I also express less anger outwards. That is of course not to say that I never feel angry anymore.
 
When can I consider my goal of overcoming anger, or any other character flaw, achieved? I believe that the goal is achieved once being angry no longer interferes with my daily life and is no longer a major concern for me. By this standard, I have achieved my goal.
 
What has made me less angry? There are probably various factors, but the most important one is likely to be my mental state. Compared to the past, I now feel generally happier and more content, more rested, and less stressed, thanks to changing habits and eliminating sources of stress (including too many daily tasks).
 
The third category are personal achievement-related goals and my list included:
 
-eat less sweets: ACHIEVED (I in fact, no longer eat any sweets.)
-spend one month in a random place in Japan: NOT ACHIEVED
-spend one month in a random place in Thailand: NOT ACHIEVED
-know all places in Singapore: I have gotten to know quite a few places, but I think it may be good to save some room for discovery for after I retire. The same is true for the previous two goals.
-various running-related goals, including qualifying for and finishing the Boston Marathon, all of which I achieved (with the exception of actually finishing the Boston Marathon)
-finish an Ironman Triathlon: NOT ACHIEVED
-become a successful athletics coach: Due to external circumstances, I have been unable to complete my coach certification. But at least the NUS staff running group coaching has been pretty successful.
 
Finally, my stated overall life goals were to “Improve my skills, knowledge and experience in research, teaching and personal development to be able to help others better”, and to “Stay healthy”. I can say confidently that overall, I have lived true to these goals.
 
Reviewing my “impossible to do” goals from seven years ago also prompted me to revise my list. When considering my future goals, I focussed on goals that I believe will really be meaningful to me one day when looking back on my life:
 
-discover something important in our research, something where I can say I made a significant contribution to Science
 
-finish my scientific review on the regulation of cellular oxygen concentrations by mitochondria, which summarises a concept to which I believe I have contributed significantly
 
-write and publish books
 
-finish an Ironman Triathlon (this currently sounds pretty “impossible”)
 
-spend one month in a random place in Japan and in Thailand, without taking my laptop: I believe that this will be a great means of self-discovery. In fact, I would love to try this next year during my semester break.
 
-overcome my obsession with records and CDs, which is difficult, because I really do enjoy them. However, I also realise that this “hobby” prevents me from living the way I truly want to live, which is not to be obsessed with material things. I currently have no clue how to achieve this goal (but my previous goal may help). On the other hand, I have overcome many other things that have bothered me or created difficulties for me, which gives me hope.
 
With regards to my goals to make an impact on others (e.g. through coaching, teaching etc.), I now feel more open and relaxed than I used to. As long as I can do something that I enjoy and that is meaningful in some way, I would feel satisfied. And there are certainly plenty of opportunities for enjoyable and meaningful tasks.
 
 
HIGHLIGHTS FOR WEEK OF 22 – 28 SEPTEMBER
 

Priorities
 
A couple of weeks ago I attended a five-hour workshop on guiding PhD students, organised by the Integrative Science and Engineering PhD (ISEP) programme. Even though I do not have any PhD students, I reasoned that the workshop could help me supervise undergraduate students as well.
 
The facilitator of the workshop focussed specifically on applying coaching approaches in PhD student supervision. What does that mean?
 
A coaching approach does not focus on giving advice or directions, but on helping students find their own solutions, primarily through questions. This approach lets students come to more awareness about themselves. It lets them see their difficulties in a different light, opening up new opportunities to bring about change or to experience difficult situations differently.
 
Although most of what we discussed was not completely new, I did gain some new perspectives.
 
The first was related to Attention Deficit Hyperactivity Disorder (ADHD).
 
First, I was truly surprised by how common ADHD is among students. In fact, an estimated 5-10% of students may have ADHD.
 
This means that I very likely have had students with ADHD in my lab. In fact, in hindsight I realise that some of my past students’ behaviour or mistakes were likely signs of ADHD.
 
Through the workshop, I also learned that people with ADHD do not necessarily suffer from both attention deficit and hyperactivity, but often gravitate towards one of the two. I also gained some awareness of how to recognise whether students may be on the ADHD spectrum, and of how to respond.
 
But most importantly, the workshop helped me to change my mindset to not immediately interpret student behaviour based on how I expect students to behave, but to consider that there may be reasons for why students behave in certain ways that I am not aware of.
 
The second insight was more of a personal realisation about coaching, related to how I can avoid conflicts when having difficult conversations. When discussing some of my past experiences, I realised how important it is to first come to an agreement that both partners are willing to discuss a certain issue before having a conversation. Likewise, it is important to ask for permission before giving any advice.
 
There were a couple of other things that were noteworthy about the event. Interestingly, the organisers invited an artist by the name of Cheryl T, who created wonderful personalised drawings for the participants. I chose the themes of mental self-care and partnership, in the spirit of the workshop.
 
I felt greatly inspired by Cheryl, who has evidently made her passion for the arts her priority, despite the fact that being a self-employed artist is far from a safe job with a stable income.
 
The other thing that was noteworthy about the event was that, of the over 200 principal investigators in the ISEP PhD programme, only seven professors attended the workshop, evidently those who have made student guidance and well-being a priority.
 
This brings me to an amazing article in Science about priorities. In the article, a PhD student by the name of Henry Henson describes how he chose mental well-being and happiness over what is commonly considered as success for PhD students.
 
Henry Henson writes: “I grew up in the United States with a clear sense of what a “successful” science career should look like. I might not have admitted it openly, but I believed that the right pedigree—a well-known university, a prestigious Ph.D. program, a respected adviser—was what determined whether someone would be taken seriously as a scientist.”
 
Spending time as an undergraduate exchange student in Denmark started to change his mind. He discovered how people in Denmark prioritised their personal lives to have “… space for hobbies, relationships, and simply living”.
When it came to deciding where to apply for his PhD, he faced a difficult choice. Should he choose the conventional path, i.e. go for a prestigious programme and prioritise hard work for the next five or so years, which is considered most likely to lead to professional success? Or should he define success differently for himself, by choosing a culture that, as he describes, “… insists on work-life balance, expects people to leave the office at a reasonable hour, and treats evenings and weekends as personal time”?
 
In hindsight, he writes, it was an obvious decision. But at the time of making the decision, it did not feel obvious at all, having grown up with the belief that the more work we get done, the more we are being valued in society.
 
Henry Henson writes that one important criterion for what he considers success as a scientist is “whether you can sustain the curiosity that brought you into science in the first place”. This undoubtedly requires some periodic distance from doing science.
 
On the other hand, maintaining a work-life balance is not by itself a guarantee for being passionate about our work. In the case of scientists, it requires that we are excited about scientific research in the first place, and take steps to maintain this excitement. Paraphrasing something I once heard Steve Jobs say, when we notice for several days that we no longer look forward to and enjoy our daily tasks, it is time to make some changes.
 
 
HIGHLIGHTS FOR WEEK OF 15 – 21 SEPTEMBER
 
 
Race Against Cancer (in the heat)
 
This Sunday our staff running team took part in the Race Against Cancer at East Coast Park. In fact, we had about 25 runners from our group who joined the 10 km or the 15 km category! Some of them completed their first race.
 
I enjoyed my 10k race (especially the moment I passed the finish line), and I was happy that I managed to run a consistent pace throughout.
 
It was also a gorgeous sunny day. And so after the race, I slowly jogged and walked back to Marina Bay, eventually stopping at Satay by the Bay foodcourt for some fruits and tea.
 
However, not everyone shared my praise of the weather. In fact, many runners complained that the race started very late and as such, the course was very sunny and hot.
 
It is of course not really surprising that most runners prefer cooler and cloudy weather. But among Singaporeans, there seems to be a general aversion to sunny and hot weather. For instance, whenever there have been a couple of rainy and gloomy days, the hosts of the Yah Lah But podcast, Terence and Haresh, to my amazement, start discussing how wonderful the weather has been lately.
 
Why do I feel differently?
 
Firstly, probably because I work in an air-conditioned environment and have a choice as to when I want to go out. I am of course aware that many people do not have this luxury!
 
Secondly, based on my experience, I feel that it is possible to get our bodies accustomed and adapted to hot weather by actually going out and being active while it is hot. Of course, it also helps to wear appropriate clothes.
 
Finally, I believe that a lot has to do with our mindset. If we think we hate the heat, we will. If we think sweating is bad, then hot weather will make us uncomfortable. But we can also choose to enjoy the feeling of sunbeams on our skin and the brightness of a sunny day. We can accept that sweating is a normal thing and not worry about it.
 
That said, I do have to admit that I find it hard to adopt a similar mindset when I visit my parents in Germany in the winter. While in Germany, I prefer to stay indoors. When I do try to go out, the cold often feels painful, even though my younger self used to love cold weather.
 
I guess it shows that adapting both our body and our mindset to the climate that we live in is important. However, adapting both our body and our mind does not happen automatically, but requires an active role on our part!
 
 
 
HIGHLIGHTS FOR WEEK OF 8 – 14 SEPTEMBER
 
Our semester break lab team, consisting of Kester, Hong Xi, Luke, and our lab manager Yee Liu
 
In recent weeks and months I have mostly written about my teaching and AI-related topics. Yet, most of my time was actually spent doing research.
 
Shown above is our semester break student team, including our amazingly enthusiastic and talented summer student Kester, Hong Xi, who has been wrapping up some work before embarking on his PhD, and Luke, who has also been trying to finish her work before starting her first research job.
 
We now have a new talented student team doing their projects. Having all these students around has been amazing, and I feel grateful that my job allows me to wear so many hats and engage in many different activities!
 
 
HIGHLIGHTS FOR WEEK OF 1 – 7 SEPTEMBER
 
 
Technological progress – Stuck in 2012
 
During a recent online conversation with my parents, my mom reflected about how complicated life has become, and about how much easier and better life used to be when there were no smart phones, no constant new messages, no continuous bombardment with advertisements, no temptations to constantly check out new things and no endless opportunities to waste our time.
 
This got me thinking, and I came to realise that I not only agree with this sentiment, but also that I have been resisting much of the technological progress over the past 15 years.
 
In fact, I feel that at a technological level, I am kind of stuck ca. 2012. And if I had a choice, I would wish that most technological progress had stopped then and there.
 
Here are some examples:
 
Phones
 
I got my first mobile phone in 2007, because at that point possessing a phone seemed unavoidable in order to be functional in society. In those days, however, the phone was primarily a calling and messaging device. And for the most part this is how I still use my phone (with the exceptions that I now also use it as a book reading device and alarm clock).
 
Listening to music
 
If I am not indoors where I can listen to physical CDs and records, I exclusively use iPods. For me, the invention of the iPod was a groundbreaking technological advance. The reason why I still prefer my iPod over my phone is not only that it is very small, but also that it has only one function: listening to music (or podcasts). As such, there is no chance that I will get distracted doing other things.
 
 
Only using my iPods also means that I have not been using Bluetooth earphones (and am likely not going to do so in the foreseeable future). Bluetooth earphones and headphones, like constantly improved smartphones, laptops or cars, are all innovations that are nice, but not really necessary (unless such innovations reduce the environmental footprint of their production, usage or disposal). Cars 10 or 20 years ago were perfectly convenient.
 
As such, all these new developments do not fulfil any real needs, but only contribute to keep our economies growing – for the sake of continued growth. But is that really in the interest of mankind?
 
Indeed, the concept of “degrowth” has become more popular in recent times. I am obviously not an economist, but it seems that achieving true degrowth may be difficult, given that according to Peter Barnes from the Centre for Humans & Nature, “growth is in the DNA of businesses”.
 
But what Peter Barnes in his article suggests might be possible is to “curb bad growth and encourage the good kind”. Here, bad growth relates to growth that hurts the environment or exacerbates social inequality.
 
One way to implement “good growth-promoting” policies is, for instance, to set limits to “carbon-based fuels, methane, nitrogen, phosphorus, water use, and tree-cutting”.
 
Back to the list of new developments that I can live without. The next one is …
 
GPS
 
I have never used GPS on my phone to find my way. I do have a GPS running watch, but I only use it to measure distances when planning some training sessions. When I run on my own, I know very well how far I have run simply by noting my time. Many times I don’t even do that, but just enjoy running without worrying about my completed distance. Even in races I like to run based on how I “feel”, rather than feeling stressed about checking my target pace on my GPS watch.
 
When it comes to using GPS as a navigating tool, I think that learning how to find one’s way around a city without using GPS, if necessary using a map, is part of truly getting to know a place. Following a map rather than GPS directions helps us to notice details we might miss otherwise and get a feeling for the infrastructure and character of a place.
 
Computers
 
Just like for iPods, the invention of laptops was revolutionary. However, there came a point where the improvements became more and more incremental. I still use a laptop from 2012 for my music. I do keep getting new laptops for my work, but mainly because my old laptops are becoming incompatible with technological advances, which of course is part of the business model of many IT companies.
 
 
Reading
 
I do use my phone to read books. If this was 2012, I would probably use a Kindle device. For me, there is no real difference between the two, mainly because I receive very few distracting messages on my phone and because I am not tempted by any social media apps on my phone. If distractions were a problem, then I would change my reading device.
 
Music media
 
I still like and prefer physical music media as much as I always have. And it seems that more and more people do. This makes me somewhat hopeful that one day more people will reject unnecessary (or detrimental) technological advances.
 
On a related note, one obvious victim of the technological advance are the musicians themselves. I recently received a newsletter from singer-songwriter Katie Malco, who described her frustrations about how music has ceased to be judged by its true quality. What determines whether music becomes popular depends on whether it fits certain algorithms that feed it to customers, often based on superficial criteria and some lowest common denominator.
 
I can fully understand how this situation, where one literally stands no chance to succeed without trying to appeal to music platform algorithms, is tremendously discouraging, even causing self-doubt and feelings of rejection.
 
But it is not only the discouragement. There are also practical consequences of not being able to make a living and, in Katie Malco’s case, having to work two other jobs in order to make ends meet.
 
A final point raised by the singer is that in order to make it in the arts these days, one needs time, money, and resources. This in turn spells the end of diversity, as people from underprivileged backgrounds simply do not have the means to succeed. As such, it seems hard to imagine how new musical genres could still evolve in the current times.
 
Teaching
 
Finally, what about technology in teaching? In my early teaching days, the only technology I used in my classes was old-fashioned clicker class response systems. They had a number of advantages, including being easy to use and requiring no wifi. But they also had the major disadvantage of being physical devices, which meant that students could forget to bring them or forget to return them to me at the end of the semester.
 
 
Needless to say, I did have to move with the changing times and adopt new polling platforms, courseware and learning management systems. However, in theory, I could run most of my in-class activities involving team-based learning and various group activities using clickers and a simple online submission platform.
 
Nonetheless, in the area of teaching, technological progress did make things easier. And because there is continuous technological progress in our society, I cannot escape it when teaching students, but should in fact embrace these changes.
 
Apart from new education-related technology, what new things that did not exist in 2012 do I use by choice? Only a few come to mind.
 
First on the list are running shoes. Running shoe technology has helped to reduce the impact of running on our joints immensely.
 
The second is solid-state drives (SSDs) and cloud storage, which have made data much safer.
 
The last one is probably cloud document sharing, although this is something I could live without.
 
Why do I find it important to think twice before adopting “unnecessary technology”?
 
One reason is that our society has reached a stage where our lives are unlikely to be significantly improved by more technology. To the contrary, there are growing concerns (and increasing evidence) that over-reliance on technology, including AI, can even lead to cognitive decline.
 
Furthermore, adopting new technology creates new dependencies. This is true in practical terms, where more technology means we have more things that we rely on, that we need to maintain, and that can potentially break.
 
But this is also true in psychological terms. Engaging with new technology generates new “memories” of exciting encounters that make us want to continue using these technologies in order to create similar experiences. This naturally comes at the cost of doing things that do not involve technology and that are arguably much more valuable.
 
As such, I often find it upsetting that it has become impossible to function in our society, in both professional and personal areas of life, without technologies like smartphones, whether it is needing a smartphone to access essential work-related websites or health services.
 
But this also highlights another fact: once new technology has arrived and been implemented, it is almost impossible to roll it back.
 
 
HIGHLIGHTS FOR WEEK OF 25 – 31 AUGUST
 
How good is AI really?
 
As widely publicised, OpenAI has recently announced its latest chatbot version GPT-5. This new chatbot version can “provide PhD-level expertise”, attesting to the trend of ever more powerful AI tools with extraordinary potential.
 
There is indeed continuous news of the amazing performance of generative AI tools. A recent example is the achievement of DeepMind and OpenAI in the International Mathematical Olympiad. Both systems achieved a gold-medal result, obtaining a score of 35 out of 42 points.
 
Among the 630 extremely talented high-school students from 110 countries, DeepMind and OpenAI scored within the top 5%. This seems truly amazing.
 
Yet, as Gary Marcus points out in his blog post, “there were 26 better scorers, including 5 who got full marks.”
 
Thus, the ability of AI did not exceed that of the most gifted high-school students (26 higher scorers among 630 contestants places the AI systems at roughly the top 4%). As such, using AI could make students highly accomplished, but not truly world-class.
 
And I believe that this is where an important problem lies. Becoming world-class requires a high level of motivation to keep practicing. However, nobody starts out studying math with the goal of becoming a world-class mathematician.
Students who are passionate about math initially enjoy the process of solving challenging problems and getting to the correct answer. As a result, they want to get better at it. But what motivation is left to become good at math, and how much can someone enjoy solving challenging problems, if one can be almost world-class without even trying hard?
 
The same is true in the arts, where highly creative and accomplished artists clearly outperform AI. Yet, the fact that AI can easily produce artistic output that is well above average discourages many from becoming artists. Why waste one’s time on learning artistic skills that can be easily done by an AI tool?
 
This brings me to a recent study by two authors from University College London and the University of Exeter, Anil Doshi and Oliver Hauser, who examined how generative AI affects human creativity.
 
In their study, the authors asked 293 participants to write a short, eight-sentence story “appropriate for a teenage and young adult audience”. For this task, the participants were divided into three groups: a human-only group, in which the writers were not allowed to use any AI tools; a group that could use a chatbot (OpenAI’s GPT-4) to provide one three-sentence starting idea; and a group that could use the chatbot to generate up to five ideas. The vast majority of participants in the two AI groups chose to enlist the chatbot for help.
 
The stories written by the three groups were then evaluated by a separate group of participants according to the novelty and usefulness of the stories. According to the authors, novelty and usefulness are two important characteristics of creativity. Novelty in the context of the study relates to “the extent to which an idea departs from the status quo or expectations”. Usefulness relates to “[capturing] the story’s appropriateness for the targeted audience, feasibility of being developed into a complete book, and likelihood of a publisher developing the book”.
 
The study came to a number of very interesting conclusions:
 
Firstly, having access to the chatbot suggestions resulted in increased average novelty and usefulness of the stories.
The authors also asked all participants to complete the divergent association task (DAT), a measure of an individual’s inherent creativity. This allowed them to assess whether generative AI helps more or less creative writers differently.
Notably, the authors concluded that “… the gains from writing more creative stories benefit some more than others: Less creative writers experience greater uplifts for their stories …”. In contrast, the study finds no evidence that the use of chatbots improved the stories of the most creative writers.
 
Another important finding of the study was that when people use AI tools to generate ideas for their stories, the stories become more similar to each other.
 
Based on these findings, the authors concluded:
 
“These results point to an increase in individual creativity at the risk of losing collective novelty. This dynamic resembles a social dilemma: With generative AI, writers are individually better off, but collectively a narrower scope of novel content is produced.”
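
To make the homogenisation finding concrete, here is a minimal sketch of how one could quantify it: represent each story as an embedding vector and compare the average pairwise cosine similarity within each group. This is only an illustration under my own assumptions, not necessarily the authors’ exact method, and the “embeddings” below are random stand-ins rather than data from the study.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two story-embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_pairwise_similarity(vectors):
    # Average similarity over all unordered pairs of stories in a group.
    sims = [cosine(vectors[i], vectors[j])
            for i in range(len(vectors))
            for j in range(i + 1, len(vectors))]
    return float(np.mean(sims))

rng = np.random.default_rng(1)

# Hypothetical stand-ins: AI-assisted stories cluster around a shared
# chatbot-suggested idea, human-only stories scatter more widely.
shared_idea = rng.normal(size=128)
ai_assisted = [shared_idea + 0.3 * rng.normal(size=128) for _ in range(20)]
human_only = [rng.normal(size=128) for _ in range(20)]

print("AI-assisted group:", round(mean_pairwise_similarity(ai_assisted), 2))  # high
print("Human-only group: ", round(mean_pairwise_similarity(human_only), 2))   # near zero

A higher within-group similarity for the AI-assisted stories is exactly the “narrower scope of novel content” that the quote above describes.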
 
For me, the study highlights two important problems.
 
Firstly, it shows that AI poses a threat to our merit-based societal order. In the past, if we worked hard on something, we were recognised for our accomplishments. These days, in contrast, where many people post AI-generated articles and “opinions” on platforms like LinkedIn, this merit- and quality-based system increasingly collapses.
 
Secondly, as pointed out at the beginning, generative AI tools prevent the development of skills, not only by depriving people of practice opportunities, but also by robbing people of their motivation to learn difficult things.
 
One way to address these concerns might be to introduce much more stringent requirements to declare whether or not, and if so how, generative AI has been used in any domain of public life. Of course, it may be easy for people to ignore such a requirement and difficult to prove that someone has done so. Nonetheless, implementing a formal obligation to declare the use of AI creates an ethical standard.
 
For example, while it may be difficult to detect whether someone has committed intellectual theft (i.e. plagiarised other people’s work), everyone knows that it is wrong to copy from other sources without acknowledging them. As such, everyone expects that, if found out to have plagiarised from others, he or she will be held responsible.
 
 
HIGHLIGHTS FOR WEEK OF 18 – 24 AUGUST
 
 
Is AI simply a bubble that is going to burst?
 
There appear to be dramatically varying views on the future potential of generative AI. On the one hand, there are people like ex-OpenAI pioneer Ilya Sutskever, who said in his recent convocation speech at the University of Toronto:
 
“The day will come when AI can do all of the things that we can do, not just some of them, but all of them. How can I be so sure of that? The reason is that the brain is a biological computer. … Why can’t a digital computer, a digital brain do the same things?”
 
On the other hand, there is scientist, author and entrepreneur Gary Marcus, who thinks that the AI bubble will burst within the near future.
 
Gary Marcus gives the following reasons for this prediction:
 
• “The current approach has reached a plateau”
• “There is no killer app”
• “Hallucinations remain”
• “Boneheaded errors remain”
• “Nobody has a moat”
• “People are starting to realize all of the above.”
 
He also highlights that the problem of AI-generated hallucinations and errors is likely to remain. This is because large language models are unable to “sanity-check their own work”, and hallucinations and errors are in fact “an inevitable byproduct of large language models”.
 
In an article in Science entitled “Why AI chatbots lie to us”, computer scientist Prof. Melanie Mitchell argues that the first reason chatbots inherently produce errors is their role-playing behaviour.
 
During pretraining, large language models are trained on vast amounts of information generated by humans. When chatbots are then confronted with user requests, they “generate language and behavior in the context of a given role, where the context is set by the user’s prompts.”
 
As a result, the chatbot does not generate a truly objective answer, but one that reflects the role that the chatbot assumes.
 
One may then ask: why can’t we simply tell the model to express the truth, the consensus view or a wide range of plausible opinions? I believe that one reason is that AI systems cannot judge or evaluate claims. For instance, when I conduct literature research or evaluate scientific results and conclusions, I refer to and rely on authors whom I know and trust in cases of uncertainty. I also look at a research result in the context of the study in which it is reported.
 
I believe that, by and large, AI is not able to make such judgements in a reliable manner, but takes scientific claims pretty much at face value. As such, to be convincing, AI has to take on a certain role and pretend to be as authentic as possible.
 
According to Melanie Mitchell, another reason “Why AI chatbots lie to us” is the posttraining process that these models undergo. This posttraining procedure is called reinforcement learning from human feedback, where humans give feedback on a model’s responses to different prompts. Melanie Mitchell points out that “Because humans seem to prefer responses that are polite, helpful, encouraging, and that don’t contradict the user, the systems learn to be overly sycophantic”, i.e. generate responses that the user likely agrees with, “sometimes fabricating actions and responses rather than disappointing the user by admitting that they cannot perform the task.” In short, as a result of the posttraining, AI tries to please the user.
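
For readers curious how this preference step works mechanically, the core objective can be sketched in a few lines. A reward model is trained so that responses raters preferred score higher than rejected ones, typically via a Bradley-Terry-style loss; the numerical scores below are hypothetical stand-ins for a neural reward model’s outputs, so this is a toy illustration rather than any company’s actual pipeline.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def preference_loss(r_preferred, r_rejected):
    # Bradley-Terry objective: -log P(preferred response beats rejected one).
    return -math.log(sigmoid(r_preferred - r_rejected))

# If raters consistently prefer agreeable, flattering answers, minimising
# this loss pushes the reward model (and hence the chatbot) towards them.
print(preference_loss(2.1, 0.3))  # ~0.15: scores already match the raters
print(preference_loss(0.3, 2.1))  # ~1.95: large loss pushes the scores apart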
 
Apart from the persistent errors and hallucinations, there is another problem that challenges the vision of Ilya Sutskever that generative AI will take over more and more spheres of people’s professional and personal lives – the trust of the users.
 
While AI companies are trying to build agents that can function independently to carry out complex tasks, I wonder if people will ever completely trust these AI agents to fully plan their holidays, manage their personal accounts, develop new products and generate new businesses, or whether they would even want them to.
 
This likely lack of trust is grounded in reality. Indeed, a recent Nature commentary (“We need a new ethics for a world of AI agents”) points out that “AI-safety researchers have long warned about the risks of misspecified or misinterpreted instructions, including situations in which an automated system takes an instruction too literally, overlooks important context, or finds unexpected and potentially harmful ways to reach a goal.”
 
As such, what Melanie Mitchell concludes in her article sounds very sensible:
 
“Some researchers, including me, believe that the risks posed by these problems are dangerous enough that, in the words of a recent paper, “fully autonomous AI agents should not be developed.” In other words, human control and oversight over these systems should always be required. But such restrictions would likely run counter to the commercial interests of many AI companies”
 
In the end, I am unsure whose vision (or lack thereof) with regard to the future of AI will come true: the one where AI dominates all spheres of our lives, or the one where much of the hype around AI implodes. But it seems to me that, as time has passed, technological progress has increasingly changed from mainly benefiting humans to having many disadvantageous effects on them, from filling important human needs to creating new ones. In the past, technology helped us to do what we do better. Now it lets us do things that we never imagined we would do.
 
Technology saves us time, yet people do not actually have more time than they used to. Our gained free time is wasted on things that we could not do in the past, but that do not really make us happy.
 
The way I see things moving forward is that AI will expand and infiltrate ever more areas of our lives. But there will also be more people who will resist this trend, who think that continuous growth and progress may not really be in the interest of humanity, who recognise the cycle in which our growth and progress leads to ever more problems that we then try to solve with more growth and progress, and more sophisticated technology.
 
Perhaps the day will come where “non-AI generated” becomes a selling point rather than a sign of being behind the times.
 
 
HIGHLIGHTS FOR WEEK OF 11 – 17 AUGUST
 
The worrying aspects of AI, Part 2
 
As discussed in detail in part one of this post, it seems to me that as a society we too easily accept the sentiment “Using AI = forward thinking; not using AI = being stuck in the past”.
 
The problem with this mindset is that it discourages us from considering whether the usage of AI tools is actually advantageous.
 
In this regard, the recently published UK government document “Guidance on the Impact Evaluation of AI Interventions” seems very relevant.
 
The document provides guidance on assessing “the outcomes of an [AI-driven] intervention with the aim of establishing whether, to what extent, how and why an intervention has resulted in its intended impacts”.
 
The document highlights that the impacts of an intervention may differ for different groups of the population. For instance, an AI-driven intervention may save costs for a company but at the same time result in a worse customer experience. Or the intervention may improve the experience for some groups of customers, but worsen that of others.
 
Furthermore, when assessing the impacts of AI interventions, not only the intended impacts must be considered, but unintended outcomes as well. Although the unintended impacts tend to be more worrying, it seems that even the intended outcomes are generally not well assessed.
 
According to the UK government document, there are three main types of impact evaluation methods: experimental, quasi-experimental and theory-based.
 
Experimental methods are randomised controlled trials in which, for instance, subjects are randomly assigned to groups that complete a task either with or without AI. The outcomes are then compared using statistical tests.
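
As a minimal sketch of what such a trial might look like in code, assuming simulated, hypothetical outcome scores rather than real data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Randomly assign 60 hypothetical subjects to an AI-assisted or control arm.
subjects = rng.permutation(60)
ai_arm, control_arm = subjects[:30], subjects[30:]

# Simulated task-quality scores (0-100); in a real trial these would be
# measured outcomes such as accuracy, completion time or satisfaction.
ai_scores = rng.normal(loc=72, scale=10, size=ai_arm.size)
control_scores = rng.normal(loc=68, scale=10, size=control_arm.size)

# Two-sample t-test: is the group difference larger than chance would explain?
t_stat, p_value = stats.ttest_ind(ai_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")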
 
Quasi-experimental methods also aim to establish a cause-and-effect relationship between an AI intervention and an outcome, but use statistical techniques to identify a comparison group similar to the treatment group but unaffected by the intervention.
 
Unlike experimental and quasi-experimental methods, theory-based methods do not aim to give a precise quantification of the impact of the intervention. An example of a theory-based method is outcome harvesting, where one collects evidence of change, for instance increased sales or improved student performance. After identifying a given change, one tries to determine what has caused or contributed to the change.
 
The need for outcome evaluation of AI tools was also highlighted in a recent commentary by Michael Brooks in Nature, entitled “Is your AI benchmark lying to you?”.
 
The article points out that generative AI research tools often fall short of expectations, which can cause researchers to waste much time, and highlights a number of reasons.
 
One reason is that AI models are often not tested against the right benchmarks, i.e. benchmarks that correspond to the applications for which the tool will actually be used, or they are tested using flawed methods.
 
Another reason is the inherent bias in the testing process. The article quotes scientist Nick McGreivy as saying “The same people who evaluate the performance of AI models also benefit from those evaluations …”
 
Michael Brooks also cites theoretical computer scientist Mengdi Wang, who points out that AI tools perform well when fed the right questions, and these performances are often presented to the public. However, when the problems are changed so that they differ from the way the model has been trained, performance frequently drops.
 
A good example, which I found on the blog of scientist, author and entrepreneur Gary Marcus, is the classic puzzle in which a man, a wolf, a goat and a cabbage need to get across a river:
 
“A farmer wants to cross a river and take with him a wolf, a goat, and a cabbage.
There is a boat that can fit himself plus either the wolf, the goat, or the cabbage.
If the wolf and the goat are alone on one shore, the wolf will eat the goat. If the goat and the cabbage are alone on the shore, the goat will eat the cabbage.
How can the farmer bring the wolf, the goat, and the cabbage across the river?”
 
In order to get everything across the river, multiple steps are required.
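
For those who like to see this concretely, the classic solution takes seven crossings, which a short breadth-first search over the puzzle’s states can verify. The solver below is my own minimal sketch for illustration, not anything from Gary Marcus’s post.

from collections import deque

# Each state records which bank everyone is on: 0 = start bank, 1 = far bank.
# Order: (farmer, wolf, goat, cabbage).
START, GOAL = (0, 0, 0, 0), (1, 1, 1, 1)

def safe(state):
    farmer, wolf, goat, cabbage = state
    if wolf == goat != farmer:       # wolf left alone with the goat
        return False
    if goat == cabbage != farmer:    # goat left alone with the cabbage
        return False
    return True

def neighbours(state):
    farmer = state[0]
    for item in range(4):            # 0 = cross alone, 1-3 = take that passenger
        if item and state[item] != farmer:
            continue                 # a passenger must be on the farmer's bank
        nxt = list(state)
        nxt[0] = 1 - farmer
        if item:
            nxt[item] = 1 - farmer
        nxt = tuple(nxt)
        if safe(nxt):
            yield nxt

def solve():
    # Breadth-first search returns the shortest sequence of crossings.
    queue, seen = deque([[START]]), {START}
    while queue:
        path = queue.popleft()
        if path[-1] == GOAL:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

for state in solve():
    print(state)   # eight states = seven crossings, starting with the goat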
 
ChatGPT is likely familiar with this problem from its pretraining data. However, when confronted with a related but slightly different problem, ChatGPT produced a completely nonsensical answer, as shown in the screenshot on Gary Marcus’s blog.
 
This highlights a major problem – even though AI performance is often very impressive, if it is sometimes wrong and we do not know when, then the tool may not be very useful and could potentially do more harm than good.
 
Translating all this to my sphere of influence as a teacher, it means that I first need to ask myself for what purpose I want to introduce AI tools into my teaching. Secondly, I need to evaluate whether the outcomes meet my expectations.
 
For instance, as I have previously discussed, in my courses we have evaluated the use of ChatGPT as a research tool, a tool for scientific writing and to propose experiments to test scientific hypotheses. Only in the latter case did ChatGPT prove truly helpful to students, and I will hence continue to incorporate related exercises into my teaching.
 
Considering whether AI tools can help me in my teaching has also led me to the conclusion that AI is not truly useful in helping me personally to make my teaching more effective with regards to the outcomes I want to achieve. I will of course continue to evaluate new developments. At the same time, I will try to introduce new ways in which students use and evaluate various AI tools.
 
 
HIGHLIGHTS FOR WEEK OF 4 – 10 AUGUST
 
 
The worrying aspects of generative AI
 
I am glad that all of my undergraduate students from the past year have now started their postgraduate studies or have found jobs, especially given how competitive the scholarship and job markets are these days. I was amazed to find out that when one of my students applied for an industry job, the company utilised an AI agent for the initial screening of the job candidates. Notably, my student had a very good experience with the AI interview process.
 
Undoubtedly, using generative AI for recruitment has advantages. An AI agent can screen large numbers of candidates more efficiently. It is more flexible and can schedule online interviews at any time convenient for the applicant. It can potentially also reduce biases (although the training process of AI models can introduce subtle biases of its own).
 
Apart from recruitment tasks, there are of course numerous other tasks for which AI tools are being explored or their usage evaluated. Examples of potentially meaningful AI applications include triage of grant submissions to make the grant review process more efficient, triage of patients in the emergency room to reduce waiting times, or support of patients with chronic diseases to improve their long-term management.
 
At the same time, there are also many applications of AI that provide little value and whose main purpose appears to be to make headlines. Examples include how well generative AI tools do on standardised tests or how they can beat any human at games like chess or Go. If anything, these applications undermine educators who want their students to learn, or rob people of the motivation to master challenging skills.
 
Of course, AI tools can also help people to gain skills. They can help students to learn better or help people to become better at playing chess or Go. But if that was the goal, then why do AI companies focus on what AI can achieve rather than on how it can help humans? After all, evidence of good teaching does not come from how much the teacher knows, but from how his or her students perform.
 
In short, it seems to me that AI is often presented as a solution in search of a problem rather than a solution to a real-world problem.
 
A case in point is an information session that I attended recently (online), in which our university IT department teamed up with Google Singapore to introduce new tools that are available or that are on the horizon.
 
For instance, the representative from Google Singapore introduced NotebookLM, a tool which allows one to upload multiple documents in various formats. We can then use NotebookLM to interact with these documents. Thus, when prompted, NotebookLM will use artificial intelligence to perform research based on our documents or produce summaries.
 
The presenter gave an example in which NotebookLM was provided with a website URL and then produced a podcast summarising the contents of the website. The podcast sounded amazingly real. The AI podcast hosts were even able to respond to listeners’ questions in an interactive manner!
 
Nonetheless, this demonstration missed a critical point for me. What I value about podcasts is not well-presented summaries, but a creative, interesting storyline or truly unique personal opinions. I would argue that thus far generative AI is not able to deliver that.
 
Of course, I could imagine applications where we upload data and then prompt NotebookLM to do research based on them. The AI output could provide new leads or hypotheses, which could then be investigated in future research, where we design specific experiments to pursue the leads or test the hypotheses using different datasets. However, like so many generative AI applications, NotebookLM’s research capacity could also be misused in unethical research to mine data and selectively report cherry-picked results.
 
While the use of AI tools to come up with new leads or hypotheses may be appealing to some researchers, it is not how I personally like to work. The way I grew up as a scientist is to take interesting questions that have not been solved and try to find an answer through the process of coming up with potential hypotheses. Outsourcing the process of generating questions and hypotheses takes much of the fun out of research for me. It also equates to using AI as a solution in search of a question, instead of the other way around.
 
Another interesting new development described in the presentation is Gemini Diffusion.
 
Diffusion models are commonly used in generative AI image-creation tools such as DALL·E 2. In these applications, noise is first added to an initial image before the system gradually “denoises” the image to produce highly accurate output.
 
Gemini Diffusion applies an analogous process to text, resulting in a new type of large language model that does not generate text token by token in the traditional autoregressive manner.
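
As a toy illustration of the forward (noising) half of this idea, here is a minimal sketch in the style of a standard image-diffusion (DDPM) process applied to a one-dimensional signal. Gemini Diffusion’s actual text-diffusion mechanics are not public, so this shows only the general principle that a trained model later learns to invert.

import numpy as np

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # stand-in for an image or data sample

T = 10                                       # number of noising steps
alphas = np.linspace(0.99, 0.80, T)          # per-step signal-retention factors
alpha_bar = np.cumprod(alphas)               # cumulative product, as in DDPM

for t in range(T):
    # Closed form: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise
    noise = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * noise
    print(f"step {t}: signal-to-noise ratio ~ {alpha_bar[t] / (1 - alpha_bar[t]):.2f}")

Generation then runs this process in reverse: start from pure noise and repeatedly apply the learned denoiser until a clean sample emerges.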
 
The Google Singapore presenter demonstrated how Gemini Diffusion can be used to quickly write a program for a xylophone application or to create browser-based games. Again, I would have preferred a demonstration focussing on useful applications, rather than on applications that discourage people from learning musical instruments or encourage them to waste their time playing games.
 
Why do AI developers focus on these headline grabbing applications?
 
In a BBC article from this week about the launch of the latest ChatGPT version, entitled “OpenAI claims GPT-5 model boosts ChatGPT to ‘PhD level’”, I found one potential answer.
 
The article quotes Professor Carissa Véliz of the Institute for Ethics in AI as saying: “These [new Large Language Model] systems, as impressive as they are, haven’t been able to be really profitable, … There is a fear that we need to keep up the hype, or else the bubble might burst, and so it might be that it’s mostly marketing.”
 
Prof. Véliz also points out that current large language models can only mimic, rather than truly emulate, human reasoning abilities, and suggests that GPT-5’s launch may not be as significant as its marketing suggests.
 
Another AI application that was briefly introduced in the presentation is the Android XR Gemini glasses.
 
These AI glasses are equipped with a camera, microphones and speakers. They work in sync with a smart phone, allowing one to access phone apps remotely (presumably via the microphone). According to the Google blog, “… an optional in-lens display privately provides helpful information right when you need it. Pairing these glasses with Gemini means they see and hear what you do, so they understand your context, remember what’s important to you and can help you throughout your day.”
 
As for me personally, I would not even want access to my phone while I am walking. Not only do I find it difficult to see any use for this invention, I find the device blatantly destructive. In my view, AI glasses are bound to distract people even more from noticing their environment, from having time to enjoy real life and to reflect. Instead, they will increase the time spent on things that serve the companies who benefit from our attention.
 
In conclusion, while the presentation by Google Singapore was impressive, I could not help asking what need these tools meet for me. I can see why the basic research around text diffusion models is important as it may be able to address gaps in current large language models. On the other hand, I currently see no real usage of NotebookLM that makes me more efficient or enjoy my work more. Perhaps this is because the advertised ways of using these applications do not focus on types of usages that are actually useful.
 
One thing that AI apps or devices like NotebookLM, Gemini Diffusion and the Android XR glasses do do is make it easier for my students to complete my assignments, of course without actually learning more.
 
This then brings me to the final development in the presentation, the SynthID detector. According to Google’s Deepmind website, “SynthID embeds digital watermarks directly into AI-generated images, audio, text or video. The watermarks are embedded across Google’s generative AI consumer products, and are imperceptible to humans – but can be detected by SynthID’s technology.”
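
SynthID’s actual algorithm is not public, but the general principle of statistical text watermarking can be illustrated with a toy “green-list” scheme of the kind described in the research literature: secretly bias generation towards a keyed subset of tokens, then detect that bias statistically. Everything below (the vocabulary, key and bias strength) is invented for illustration and is not how SynthID itself works.

import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]
KEY = "secret-watermark-key"

def green_list(prev_token):
    # Secretly partition the vocabulary based on the previous token and a key.
    seed = int(hashlib.sha256((KEY + prev_token).encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, k=len(VOCAB) // 2))

def generate(n_tokens, watermarked):
    rng, text, prev = random.Random(0), [], VOCAB[0]
    for _ in range(n_tokens):
        greens = green_list(prev)
        if watermarked and rng.random() < 0.9:
            token = rng.choice(sorted(greens))   # biased towards green tokens
        else:
            token = rng.choice(VOCAB)            # unbiased choice
        text.append(token)
        prev = token
    return text

def green_fraction(text):
    # Detection: an unusually high share of green tokens reveals the watermark.
    hits = sum(tok in green_list(prev) for prev, tok in zip(text, text[1:]))
    return hits / (len(text) - 1)

print("watermarked:", green_fraction(generate(500, True)))    # well above 0.5
print("plain text: ", green_fraction(generate(500, False)))   # close to 0.5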
 
The Google Singapore presenter called this a potentially “game-changing” tool in the education sector. However, it is important to remember that the reason such tools have become necessary in the first place is the disruptive impact that the availability of generative AI has had on student learning. As such, another way to look at SynthID, at least in the education sector, is as a very questionable fix for the damage that AI has done.
 
It seems astonishing that with all its impacts on learning, the job market, the trustworthiness of publicly available information, scientific publishing, energy usage and climate change, and of course the potential damage of false information, there is so little policy discussion of whether we are moving in the right direction and whether new AI tools or applications are actually in our interest and beneficial.
 
In this regard, it is important that the UK government has published a new document entitled “Guidance on the Impact Evaluation of AI Interventions”, as highlighted in a recent article in Nature, which I will probably discuss in another post.
 
 
HIGHLIGHTS FOR WEEK OF 28 JULY – 3 AUGUST
 
 
The Anxious Achiever: Is People Pleasing Hurting Your Career?
 
This week I listened to an interesting podcast about relationships, communication and leadership. In the podcast, Morra Aarons-Mele, host of the “The Anxious Achiever” podcast, discussed various points related to these topics with therapist and human relationship expert Kathleen Smith. And I must say that I found a number of these points very insightful.
 
For instance, when we are in a conflict with another person, we usually think that we have to resolve it through discussion with the other party, which often seems like a scary thing to do. This seems especially important if we think that the main culprit is the other person (which is what we tend to think most of the time). However, Kathleen Smith pointed out that in fact anyone involved in the conflict can effect change, not only the person who seemingly causes most of it. Rather than talking through a conflict, it often makes a huge difference if we simply change our own behaviour.
 
For instance, if gossip is a major problem in our work environment, it can have a big impact if we ourselves refrain from gossiping. It is likely that the other people will notice, and this may change their behaviour in turn.
 
This course of action requires changing our mindset from wanting the other person to change their behaviour to changing our own.
 
Kathleen Smith also pointed out that it is helpful not to always expect that others will react the way we want them to. For instance, my students sometimes react in ways that I do not expect, dismissing my suggestions without justifying their opinion. It is a useful skill to simply let it pass and assume that they probably have a good reason to do so. Whenever I manage to stay calm and quiet in those moments, it always turns out better in the long run. My frustration quickly passes, and I have avoided provoking a conflict.
 
What often stands in the way of following this course of action is my identity, i.e. my belief of being someone with experience and knowledge. But it is helpful to let go of this mindset. As highlighted in the podcast, it is also useful in those moments to focus less on what I expect others to be and more on how I want to be, which is supportive rather than directive.
 
But there was another point in the podcast that I found particularly illuminating, and this point relates to the importance of praise versus promoting the ability of others to praise themselves.
 
For the longest time I have been of the opinion that the best way to encourage others is by praising their effort, attitude or small achievements. What the conversation between Morra Aarons-Mele and Kathleen Smith made me realise is that this may not always be the best approach. In fact, others, for instance my students, can become dependent on my praise. Receiving praise may even become the main motivation.
 
Kathleen Smith suggests an alternative approach – asking growth questions, such as “What have you learned thus far?” or “How do you think that things have gone in regards to some specific aspect?”
 
The difficulty with this approach, as pointed out by the host, is that when we ask these types of questions, the people on the receiving end often think that we do not think that they have done well and that there must be something wrong with the way they have performed. This is especially true if they are used to us praising them.
 
So perhaps it is not good to establish that expectation in the first place. If we feel praise is in order, it may be better to voice it at the end of our growth conversation.
 
 
HIGHLIGHTS FOR WEEK OF 21 – 27 JULY
 
 
How to love?
 
This week I listened to an interesting episode of the Ezra Klein Show podcast, in which Ezra talked with “Buddhist psychiatrist” Mark Epstein. Mark Epstein is a trained psychiatrist, but also a practicing Buddhist and a psychotherapist, and the author of several books.
 
Ezra Klein and Mark Epstein discussed several interesting questions, such as what meditation can achieve and whether it allows us to change our thoughts, or how to deal with mental trauma and with thoughts that we prefer not to have.
 
The topic of unwanted thoughts resonated most with me, because it is one thing that I have been thinking about a lot lately. These unwanted thoughts relate to judging others, rarely in a positive way, and often leading to unfriendly attitudes.
 
As such, I was rather disappointed when, at the end of the podcast, it became clear that although meditation can help us take our negative thoughts less seriously and show up differently when they occur, it does not really allow us to change our thoughts.
 
However, after listening to the podcast a couple of times, an unexpected potential solution appeared in the following quote from Mark Epstein’s book “Open to Desire”:
 
Love is the revelation of the other person’s freedom.
 
What does this quote mean?
 
When we build a loving relationship with another person, we usually aim to find unity. We expect to agree on many things, complement each other and become close to each other. When we then realise that the other person thinks or acts differently from our expectations, holds back certain parts of herself or himself or wants to retain some freedom, this initially comes as a disappointment.
 
However, what the quote expresses is that true love means realising the other person’s freedom, as a result of which our initial disappointment eventually turns into release of the other person. This release does not mean that the relationship becomes less loving, but in fact more loving.
 
If I look at loving relationships that I have had in the past or still experience, I find that this is indeed true. For instance, when it comes to my former partner or my current relationship with my parents, I have been or I am much more accepting towards any differences in thoughts or actions, in a way that I would not be with strangers.
 
How does this help to judge people with whom I do not have a loving relationship differently?
 
I realise that the reason why I get upset about others is ultimately because I want them to act as I would expect them to, in the way that I would act. An alternative approach would be to apply the same principle of loving people without expecting them to be like me. This would allow me to grant others the freedom to have their own values as a result of their different backgrounds, and to acknowledge that they may encounter their own struggles with things they want to overcome. As a result, I might be able to show up differently in my everyday life and feel more peace and happiness.
 
 
HIGHLIGHTS FOR WEEK OF 14 – 20 JULY
 
 
What exactly determines whether professors are successful? – PART 2
 
A few weeks ago I discussed the topic of what determines the success of new assistant professors in their research. In my previous post I highlighted that various studies have collectively identified three critical factors: clarity of expectations, finding balance, and collegiality and collaboration.
 
Since then, I have read the Career Pathways Collection, a very interesting series of articles published in the journal Nature Metabolism, in which various young investigators shared the lessons they learned while trying to establish their own research programmes and research groups.
 
When I was a young Assistant Professor, I did not think much about the various points made by these authors. Instead, I always tended to think about the task at hand without questioning whether my strategy was actually likely to be successful. When I look at young researchers, I often see the same tendency to focus on the current demands. As such, the advice expressed by the various authors is a good reminder to be conscious of our conduct, especially if our position offers some level of freedom of direction.
 
The first aspect that really resonated with me was to focus on one major research topic in order to achieve a significant accomplishment and build a reputation.
 
Lawrence Kazak writes: “[In Bruce Spiegelman’s lab] … I learned the importance of focusing on one problem at a time and trying to solve it to the highest molecular resolution of my ability. … A single manuscript cannot contain an entire body of work. However, a multitude of papers, each building upon the findings of the previous ones, is a style of science that I have found to be incredibly satisfying and an approach that my trainees also appear to take satisfaction in.”
 
And in another quote from his article, he writes: “My advice for late-stage postdocs or early faculty is to choose a biological problem that is important to you and to focus on solving one problem at a time. I would strongly advise against spreading yourself thinly, especially at the start of your career.”
 
In contrast, having worked throughout my predoctoral and postdoctoral training on numerous different topics, I tried to continue to work on all of them when I became independent. Even to this day I am interested in numerous topics and find it difficult to link our various research projects in some coherent manner.
 
The irony is that when I was a postdoc, we in fact used to have monthly joint lab meetings with Bruce Spiegelman’s group. While I was in awe of the quality of the research in Bruce’s lab, I did not recognise that one important reason why his lab achieved this quality was in fact the focus that his lab members maintained.
 
Bruce Spiegelman himself is an embodiment of the principles highlighted in Lawrence Kazak’s quote above. He has become associated with ground-breaking discoveries in one specific research area, in his case brown adipose tissue biology. In fact, most scientists would consider him the pioneer and leader of brown fat biology. To be as successful as Bruce Spiegelman probably requires not only a focussed approach to choosing research questions, but also focusing one’s life on research and making research one’s priority.
 
Benjamin Weaver, another researcher featured in the series of articles, wrote: “If I could offer any comment to leadership and to the public at large on the importance of stable research funding, I would emphasize that science is not a job, it is a calling. We don’t stop working at 40 hours a week. We put our entire lives into what we do every single day.”
 
That said, it is also important to recognise that there are various avenues to success, and that there are many successful researchers who do not dedicate their entire lives to making ground-breaking discoveries. However, what is certainly true is that truly successful scientists do not look at research as a job, and are willing to make sacrifices to be able to pursue their passion.
 
Benjamin Weaver also emphasizes the need to focus and discusses why focus on one particular problem is so important, especially early in one’s independent career. He described how his own focus led to a significant publication, which in his case turned out to be a game-changer. He also highlights that in order to reap the benefit of a good publication, one has to capitalise on it by publicising the paper through seminars, attending conferences to present one’s work, talking to others and consequently receiving more speaking opportunities.
 
This is reflected in another quote from Lawrence Kazak’s reflection: “In my experience, to obtain grants, the community needs to see that you can take a problem and solve it, and to do this well, you need to focus.”
 
To achieve a publication with significant impact of course requires more than just focus. One aspect that greatly matters, as discussed by several of the authors, is the experimental approach that one chooses. Factors that can help to generate truly novel findings are pursuing unique experimental approaches and combining different areas of research.
 
Beyond achieving a significant publication, maintaining focus on one problem or topic and developing a unique approach is important in order to establish one’s own niche. Min-Dian Li writes:
“Identifying your niche is key to becoming a successful principal investigator within five years of independence. For me, such a niche is not only a spot in which you can survive given your resources and strengths, but also a unique area of expertise that colleagues might draw on to advance their own science and, ultimately, human knowledge. The latter aspect includes your own group’s rigorous findings, together with the service that you can offer the research community.”
 
I believe that the importance of creating one’s own niche as a researcher cannot be stressed enough. It allows one to become the go-to person for specific questions, enabling collaborations and invitations to speak at conferences.
 
Min-Dian Li also highlights that identifying a niche is not always a straightforward process, but often requires active searching and being open-minded. It involves talking to others, getting advice and collaborating. Sometimes it also involves serendipity. Finally, he points out that “Identifying a niche also requires resources to be concentrated in order to achieve a breakthrough.”
 
Another important point raised by several of the young principal investigators is hiring of group members. Myriam Aouadi writes: “In my opinion, recruiting trainees is the most important and most difficult decision you have to make as a PI [Principal Investigator]. When you start your lab, you do not have the reputation of the more senior PIs, but I had some assets, such as a good start-up package, interesting projects and the ability to be present in the lab and mentor my team.”
 
It is indeed true that hiring lab members as new faculty can be challenging, especially when it comes to finding candidates with more experience and expertise. As a result, when facing pressure to get the lab on track and produce results in order to meet the tenure requirements, it can be tempting to hire researchers as soon as possible.
 
However, Ed Chouchani, who is an experienced and extremely successful Principal Investigator, points out: “When I first started my lab in 2017 much of the advice I had received from mentors and colleagues was to be very careful with my first hires. When building a scientific team for the first time, there is a strong urge to bring as many people as you can into the group as quickly as possible. In the view of my mentors — and I now share this view strongly — this should be tempered by the much more important element of building a team around shared goals, motivations and individuals who make each other better.”
 
He also elaborates on how he hires postdoctoral researchers: “I frequently seek out prospective post-docs with a strong foundation in one area who are looking to apply this expertise in a new realm of biology.”
 
Similarly, another author, Fei Yin, writes: “Recruitment decisions were equally crucial; I learned to pace the scaling up of the laboratory, ensuring that each addition to the team brought significant value without overextending our resources.”
 
Looking back, I must admit that I did not follow much of this advice. It is true that finding researchers with different expertise was difficult, especially given that my funds to hire experienced researchers were rather limited. Nonetheless, I have always tried to be the one in control of the work in my laboratory. To this day I feel it necessary to know the techniques and experimental approaches used in my lab. If new methods are to be used, I am usually the one to initiate them.
 
This approach has advantages. I am better able to help my lab members, who have been almost exclusively students, troubleshoot problems, and to give them advice. At the same time, it may have stifled my students’ independence. Most importantly, this approach is not well suited to making ground-breaking discoveries, which in this day and age require methodologically diverse approaches and interdisciplinary collaboration. One person alone cannot do all the innovation.
 
Another factor important for success, highlighted by several of the young investigators, is personal growth, including learning how to lead and mentor others, how to communicate, and how to ask good scientific questions and answer them.
 
These skills are not taught to us directly; they have to be actively acquired by observing others and picking up the qualities we see in them. For instance, throughout my training I had many different supervisors, and each of them guided with a very different style.
 
Ed Reznik highlights another important source for honing such personal skills: informal mentors. He writes: “Almost none of the individuals above [people from whom he learned] were my formal mentors, and only some were my collaborators, but they nevertheless offered me invaluable glimpses into their personal scientific philosophy.”
 
What matters critically is that we make an active effort to learn to be better leaders, mentors, communicators and scientists, something that I personally should have focussed on much earlier in my career.
 
At the same time, it is also critical that our environment is conducive to promote exchange and personal growth. James White writes: “As a postdoc, I would have taken a faculty position based purely on the offer letter and prestige of the university. Now, I know the importance of fit and environment. … What does your lab look like? Who do you collaborate with? Do you have trusted mentors?”
 
There were various other recommendations that resonated with me, such as “that one aspect of independent research is acquiring a certain comfort in one’s own reasoning and decision-making” (Ed Reznik) and the importance of celebrating little wins (Lydia Finley and Leah Gates).
 
There were also a number of quotes that I found very inspiring and true, and I would like to end with the following two:
 
The first quote is by Bilal Sheikh:
“My message to trainees: yes, science can be a difficult road that is full of uncertainties and anxieties. There is no doubt that more support is required to provide better job security and work–life balance to young scientists. The expectations are high and imposter syndrome is real. It does take a lot of sweat and sacrifice to succeed. But science gives just as much back. It provides you with a lifestyle that very few other careers can. There are opportunities to work in different countries. It gives you the freedom to explore your own thoughts and ideas. It allows you to work with incredibly bright minds and great personalities. It provides a platform to give back to society. So, embrace the journey and let your curiosity take you on an adventure.”
 
Finally, Santiago Vernia writes:
“If I had to give advice to people at an early step of their scientific career, I would suggest they ask themselves whether they are really enjoying what they are currently doing. It is quite frequent to think that the challenges during your PhD will be different during your postdoc, and then think that grass will be greener in your own independent lab. In my experience, research in academia is beautiful but comes with challenges at every single step. The question is: do you enjoy trying to overcome these difficulties? Because in the end, this is what you will be doing with most of your time (at least, this has been my experience). If you don’t enjoy it and you are expecting some future reward, be open minded and consider developing your scientific career outside academia. There are so many interesting options nowadays. It is a privilege to work in something that you are passionate about, and life is too short not to do it. This is at least what I tell my sons.”
 
For older posts click HERE