It's also always error-prone. Nothing in the field is perfect. Reality is a bad approximation for your model at times, if you take a model-centric view.
I would be immensely skeptical that field work is ever going away. There may be aspects of truth in this around cost of travel, risk, seniority.
Exploration geophysics paid for me to travel to and across more than half the countries on the planet, calibrating old maps, datums, projections against the 'new' WGS84, scaling peaks to stage base stations, getting familiar with the ins and outs of tides, magnetic fields, gravity, radiometric backgrounds, finding a good band in Mali ...
Loved it.
Mawson had the field trip of a lifetime (for his two mates, it was the end of their lifetime!) and it didn't end his bug for the outside. I don't think he was made to sit in a lab.
I'd say your Mali trip was the same: it hasn't made you want to stop being outside from the sound of it.
I've "retired" to agriculture tech and labour support for W. Australian family grain production. We've almost finished harvest and I've been doing a lot of scrolling and posting here while hanging about near idle "on call" fire tenders (we had a hundred fires, mostly from lightning strikes, in a single week just recently).
* https://www.watoday.com.au/national/western-australia/wa-bus...
* https://www.youtube.com/watch?v=yulvSvtFVqc
^ Further south than I'm based, and a header fire, not a strike. Okay when caught early - life and town threatening if not.
Oh, yeah: Songhoy Blues: https://www.youtube.com/watch?v=BOValSt7YOY
The Mali trip was notable for random types firing weapons at our aircraft while we were running lines with 80m ground clearance - we had to armour the cockpit bellies and stuff the fuel tanks with mesh.
Datums can get dull fast but there's adventure inherent in surveying. You should write a book, or at least a chapter or two. "Nadir Point" has a nice ring to it...
Mine Camps .. with a Long S ( https://en.wikipedia.org/wiki/Long_s ) once appealed, but I fear getting cancelled.
There was always something happening, whether it was shipboard fires in the disputed parts of South China Sea or India / Pakistan engaging in cross border nuclear tests in our survey zone.
That last one followed several of us about for years, anytime we crossed a US controlled border they got interested in how we knew what they didn't ...
* https://www.nytimes.com/1998/05/13/world/nuclear-anxiety-the...
.. look, we just happened to be there with a 42 litre doped Sodium Iodide crystal pack and 256 channel gamma ray spectrometer just as the tests kicked off ...
I've done a fair bit in the field, but a huge part of my career has been mining old datasets and reinterpreting things in light of new data/etc.
What the article is describing isn't new in any way. But it also doesn't remove the need for fieldwork or the need for the experience of having done fieldwork to use existing datasets. Observational sciences (e.g. geology, biology, etc) where you can't easily replicate the environment you are studying in the lab are always going to hinge on some sort of fieldwork.
Finding creative ways to use existing data doesn't change that.
Of course, in-person exchanges still happen, but there's something of a default to do most things remotely because it's more efficient (and honestly, easier for all parties involved). The result is that you don't get to see cool or unusual machines/setups that often, and some flair of doing research is lost.
I can imagine that that's especially painful for new ecologists, because fieldwork is also a way to experience things that you otherwise wouldn't. Hopefully, we can bring some of it back with edge devices and models.
It’s been sad seeing journalism in the online era, where so much (not all!) content is produced without really visiting or researching things. Often it’s based only on statements / tweets, sometimes digging a little deeper with phone calls, sometimes reading a book on the topic, but rarely do journalists seem to show up anywhere.
When reading something like Didion’s piece on the LA highway central command, it shows how irreplaceable lived experience is.
I’m not exactly sure if we share a similar experience, but living on a trail in the Santa Cruz mountains affords me the opportunity to hike the same trails every weekend, year round (or even daily).
I’m not taking measurements, but it’s incredible to witness the effect of seasons on familiar territory just a few miles outside town. The weather changes, the wildlife changes and the air changes (moist to dry and back).
It’s an incredibly special experience to revisit the same place time and time again and witness the impact of … time. I hope you found something else to replace your familiar seaside.
After all, if you want a different answer, you can easily tweak a model.
Much harder to force a bunch of people in the field to not see what they see. Certainly not impossible, however!
Why be sceptical? The model will answer consistently, inconvenient truths won't get in the way.
The article should perhaps introspect a bit more instead of setting up a false dichotomy between "rainforest field work or computers".
I would say the main workflow is: collect some new data nobody has collected before, look at it and see if it shows anything interesting, make up some interesting publishable interpretation.
It feels like it'd be smarter to start by working with existing data and publishing that way. If you hit on some specific missing piece, go collect that data, and work from there. But the incentive structures aren't aligned with this.
The AI angle is really shoehorned in, and irrelevant to the larger problem. Sure, it allows you to annotate more data. Obviously it's more fun to go do field work than count pollen grains under a microscope. If anything, AI makes it easier to do more fieldwork and collect even more data b/c now you can in theory crunch it faster.
Perhaps creating secure private clouds for scientists, away from AI scrapers etc that scientists can access, with associated counter-surveillance, is the way forward.
I'm a GIS guy working on cloud native tech, but with a focus on privacy. I have a local-first Mac native product nearing beta. I'm thinking a lot at the moment about what the data sharing options could be.
Some people scrape charts in publications to extract data. This has been done for a while. Maybe AI could automate this step. That'd be useful.
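The core of that chart-scraping step is just a coordinate transform: once you know the pixel positions of a couple of calibration points on each axis, every extracted point maps linearly back to data space. A minimal sketch (the calibration values and point positions here are made up; in practice a vision/AI step would supply the pixel coordinates):

```python
def make_axis(pixel_a, value_a, pixel_b, value_b):
    """Return a function mapping a pixel coordinate to a data value,
    given two calibration points (pixel, value) on that axis."""
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda px: value_a + (px - pixel_a) * scale

# Hypothetical calibration: x-axis pixels 100..900 span years 2000..2020,
# y-axis pixels 500..50 span values 0..100 (y grows downward in images).
x_of = make_axis(100, 2000, 900, 2020)
y_of = make_axis(500, 0, 50, 100)

# Pixel positions of points lifted from the chart (hypothetical):
points_px = [(100, 500), (500, 275), (900, 50)]
data = [(round(x_of(px), 6), round(y_of(py), 6)) for px, py in points_px]
print(data)  # [(2000.0, 0.0), (2010.0, 50.0), (2020.0, 100.0)]
```

Tools like plot digitizers have worked this way for years; the part AI could plausibly automate is locating the axes and points in the image, not this arithmetic.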
From a pure business perspective, AI is largely about copyright circumvention. The laws are lagging and people are making serious money from data theft.
I don't see how copyright enters into it. I doubt that "oh hey I published this very valuable and proprietary dataset online but it's copyright me so pretty please don't use it to make money" was ever going to get you anywhere to begin with.
https://news.bloomberglaw.com/us-law-week/big-tech-wins-in-c...
If the other party can prove you broke into their system then you've got a problem.
If you redistribute something that's copyrighted you've got a problem.
Merely possessing something isn't a problem in and of itself ... probably? Unless the other party can demonstrate damages at least.
I guess there may be a broader and less public-oriented set of funders in geology- and maybe there aren’t as many standardized data types as there are in the world of biology.
What you need is people uploading data in consistent, well-documented formats. There are all sorts of projects that do this, but there is a strong incentive to not upload things, or to sort of half upload them .. but in a way where anyone using the data is going to have to reach out to you. Not suggesting bad intentions. Maybe you're still working with the data and expect to publish more and don't want someone swooping in and beating you to the punch. Typically journals require data availability, but it's kind of informal and ad hoc.
Attempting to convince people to change course and focus on restoration has mostly been a losing battle, with much larger forces behind the main detriments that make local changes feel inadequate.
> Scientists who run long-term ecological studies, in particular, report that they struggle to find funding.
It's cheaper and easier to do stuff sitting at a desk. In theory that's a good thing if it means more work gets done, but field work has to happen too. For many people it's the best part of the job, for others it's a pain that has to be suffered through to get the data they need. Hopefully there's room (and funding) for both kinds of people to do the work they want.
There's also a strong belief in "statistical magic." Faced with a bad or insufficient data set, someone will say: "Let's give the data to <statistician> and have them work their magic on it."
That the results actually have to be influenced by the data in some way is something that has to be explained to people. In all of my years as a scientist, I've learned that there's still no substitute for good measurements. Good data can be cheaper than analysis of bad data.
'don't waste clean thought on a dirty enzyme'
The specific problem he was facing was looking at DNA polymerases - and the problem with enzymes is they are catalytic - so a 1% impurity at the protein level might account for 100% of the activity you are measuring and wrongly ascribing to the 99% of purified protein.
So much of science is trying not to fool yourself.
It’s so important that we write these down, so when these people have forgotten why they’re not making any progress and they’re searching for answers, they’ll find what we wrote down and say “ohhh, we had too much hubris, thought we were smarter than everyone else, and didn’t listen to how important actually going outside is.”
The 'computer scientist' quote illustrates a frustrating trend: tech-centric 'drive-bys' that lack the ecological context required for good science. On the flip side, the 'old guard' who ignore modern data assimilation are leaving massive potential on the table. The field is rightfully shifting from site-specific anecdotes to foundational, broad-scale work, but we need both skillsets to do it justice.
An example is the National Science Foundation NEON project, which is a long-term ecological monitoring initiative with common field methodologies across 81 North American sites. https://www.neonscience.org/
You should be able to publish data as a paper and get academic credit for doing that. Then others can publish analyses of that data, crediting you.
I will add that funding can complicate things a bit, funding sources often get wowed by more "advanced" methods, while the underlying science might be less than stellar. There are important questions that can be answered by small, elegant field studies, and there are questions that require larger datasets and more computation. When we start putting the methodological cart before the scientific horse, that's where we run into problems.
There is still field work that happens. AI can not replace that. You'd need to literally simulate the whole world before AI can even get close to gathering all the data obtainable here.

For instance, on a given ant hill: which plant species will be more prevalent there? (For those not knowing a lot about ants: some ant species carry specific plants or defend plants against invaders. The most usual example is leaf-cutting ants, but there are many additional examples, and for various reasons you will also find different plant species to be more prevalent close to an ant hill in a forest area than other plant species.)

AI can steal existing data, but there is no way it can gather real data UNLESS you are able to monitor this. This is possible via machines, e.g. drones, but AI does not understand what it is doing, and even with instructions it may still just hallucinate data. So perhaps one day this may all be fully automated (sensor systems can do all humans can do too, of course), but right now this is simply not the case. And this is just one example of many more.
For example, if you have a blackfly infestation, the first sign is often a steady stream of ants - they are feeding off the honeydew created by the blackfly, they protect the blackfly from predators like ladybird larvae, and they will even transfer the blackfly to new plants.
Once I knew this I found the best way to tackle blackfly was not to go after the blackfly, but distract the ants - a bit of jam works a treat.
Though you could then argue the ants have moved on from farming to a protection racket.
It's a bit like the distinction between being able to swim around, like a fish, versus just having offspring who end up a bit closer to some goal, like a coral. Both end up in a place where they can live, but only one got there by being the lucky survivor against a background of alternatives that were born elsewhere.
It's made clear in the title of one of their papers: "Plasticity and not adaptation is the primary source of temperature-mediated variation in flowering phenology in North America" (https://pubmed.ncbi.nlm.nih.gov/38212525/)
Granted, my teaching for what drives evolution is colored by the idea of the peppered moth, which clearly had everything necessary for both colors. The population change on dominant color, though, was driven by environmental changes.
Even that title you give seems to preclude that increased plasticity can itself be an adaptation? Wouldn't we need to show that there was no change in plasticity during this time in order to show otherwise?
You could also make bad assumptions related to rates. One might assume that adaptation will occur within some time window when actually they're extrapolating based on plasticity.
Sure, plasticity and its limits are themselves the result of natural selection, but there are still experiments you can do to distinguish the two (e.g. does the change show up in the genome versus the transcriptome or the proteome?). I guess I'm just uncertain about how useful it is to split hairs beyond a certain point. Like, you could say that everything is caused by the big bang, case closed, but that's not going to affect any subsequent decisions that people might make.
As an easy example, to your point, would you say that adaptation and evolution lead to land creatures? There are obvious limits to how quickly and how well that could have possibly worked. But it also feels safe to say that some organisms almost certainly had a level of plasticity that made it more likely for them to survive the process?
If it's both plasticity and adaptation at play in all cases then there's not much sense drawing a line, such as with the evolution of land creatures. And it's also probably silly to worry about adaptation without plasticity. But it does make sense in the plasticity-without-adaptation case.
For instance there are people born in Tibet with adaptations for altitude, and people born in the Andes with adaptations for lead tolerance. And if those of us without those adaptations were to try to tolerate such heights, and such concentrations of lead, we wouldn't do as well because we lack the genes for that particular adaptation. Even though we are partially adapted to those things in general.
The "X is not a result of adaptation" predicate makes sense in relation to other things that are more directly a result of adaptation.
To the peppered moth example, it isn't like there was a specific mutation that caused them to change color. Both colors always existed. The prevalence at the population level was essentially the result of external drivers to the species.
So, in plants, it is fair to say that those with more plasticity are able to change when they flower in response to environmental changes. Why is it not fair to expect that those most able of doing that are the ones you are likely to see further adaptations on to keep adjusting?
Yes, we can see individual mechanisms and we can focus on them. Such that we don't have to hope for random mutations that further advantage things. But, adaptation and evolution are not defined as only random mutations. (Or have they been somewhat redefined into that?)
I think more cleanly stated for my point, I am not arguing against the idea of plasticity. I am arguing that adaptation is not defined as only random mutations introducing novel characteristics. Specifically because the fundamental example for evolution that used to be taught was definitively not a novel characteristic of the species.
> Why is it not fair to expect that those most able of doing that are the ones you are likely to see further adaptations on to keep adjusting
It's a reasonable hypothesis, but what kind of experiment would confirm it? Would you be comparing between species or between offspring of distinct individuals?
I think you'd need to focus on sexual reproduction for both your experimental and control variables, since you're looking to reason about the rate of adaptation.
I'm going to read the guy's paper and report back, but from the title I'm thinking that he's finding that asexually propagated plants show just as much flowering-time change in the presence of temperature changes as those whose parents were pollinated and concluding that this code is quite flexible, but that any rewrites among the sexually propagated set are not related to flowering time change. That would be my approach anyway.
So my point is just that those are different results. It would be interesting (though perhaps a bit Lamarckian) if evolution favored the more plastic when granting adaptations towards still greater plasticity. But that's a totally separate conclusion from one that studies a phenomenon which changes even when genes do not.
From there, it is the idea that the plasticity being looked at here is not something you would be able to identify at the genetic level, as they all have that genetic code, as it were.
I think that makes a bit more sense, and I can sympathize with the shortening of "not genetic changes" to "not adaptation" for a title.
For the experiments I would need to know if something was adaptation or not, my point is that I didn't think adaptation was something that had a driving mechanism in the genetics. Would be like looking for adaptation reasons on why some blood types are more common than others. At large, the answer is just that any blood type that confers disadvantage is expected to go away. Any that confer advantage would be expected to get prioritized. And this advantage will be as influenced by the environment as it is by anything else.
Again, it all goes back to my understanding of the moths. In a world where the color of the moth doesn't matter, you'd expect a rough mix of colors. In the world where blending in with soot was important, the dominant color became black. When that went away, the dominant color flipped back. At all times, it was basic genetics that determined what color moth reproduction made. It was major changes in the environment that determined which one had more representation in the populations genetics.
90% of the time is spent analyzing data or writing up proposals/grants/papers. I don't think AI was the turning point.
Someone is jumping the shark.
I always felt like one of the primary motivations to pursue science was being able to bail out of the office for the entire summer for "field work"...
That's the original title before editorialization.
Instead of counting bears in the forest by hand, you set up a hundred trail cameras and then use computers to count bears 24/7 across an entire area. This is field research, on a scale that was previously impossible.
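Most of the novelty in that setup is in the detector; the "counting" part is a small aggregation step. A minimal sketch of that step, assuming some off-the-shelf detector has already labeled each trail-camera frame (the detection tuples and the one-event-per-camera-per-hour deduplication heuristic here are made up for illustration):

```python
from collections import Counter

# (camera_id, timestamp, label) tuples a detector might emit:
detections = [
    ("cam01", "2024-05-01T03:12", "bear"),
    ("cam01", "2024-05-01T03:13", "bear"),   # same animal, next frame
    ("cam07", "2024-05-01T22:40", "deer"),
    ("cam32", "2024-05-02T01:05", "bear"),
]

# Collapse consecutive bear frames from the same camera within the same
# hour into a single "event", so one bear walking past isn't counted twice.
events = []
prev = None
for cam, ts, label in detections:
    if label != "bear":
        continue
    if prev and prev[0] == cam and ts[:13] == prev[1][:13]:  # same hour
        continue
    events.append((cam, ts))
    prev = (cam, ts)

counts = Counter(cam for cam, _ in events)
print(counts)  # bear events per camera
```

Real camera-trap pipelines use far better deduplication (re-identification of individuals, spacing models), but the shape is the same: machines do the watching, and the science is in how you aggregate.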
One of the issues that I think affects these people is that the scientific process can not guarantee that something is "correct" or "incorrect". Something that at one point was known to be correct can later be disproved by another experiment or more specific conditions.
Some people want/need simple certainties, and as soon as they stumble upon something different, they will shift their trust to something "simple and clear". And they can do that again and again, as long as they don't need to accept some things are complex or unknown.
I do not know any school system (not that I am an expert or searched for it, just an impression) that emphasizes this dynamic nature of understanding, or that tries to make people accept the unknown.
(Been reading some of Wojciech Zurek's semi-pop articles on decoherence, and he always puts in a paragraph about the evolution of language and points out that our intuitions won't work well in the quantum domain.)