
This seems reasonable?

Suppose the full result is worth 7 impact points, which is broken up into 5 points for the partial result and 2 points for the fix. The journal has a threshold of 6 points for publication.
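
To make the toy arithmetic explicit (the point values and threshold below are the hypothetical numbers from this comment, not anything a journal actually computes), a minimal sketch in Python:

    # Hypothetical "impact point" numbers from the comment above; no journal
    # actually scores submissions this way.
    THRESHOLD = 6              # assumed bar for publication
    partial_result, fix = 5, 2

    print(partial_result >= THRESHOLD)        # False: the partial result alone is rejected
    print(fix >= THRESHOLD)                   # False: the fix alone is rejected
    print(partial_result + fix >= THRESHOLD)  # True: the bundled full result would clear the bar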

Had the authors held the paper until they had the full result, the journal would have published it, but neither part was significant enough.

Scholarship is better off for their not having done so, since publishing the partial result early meant someone else might have gotten the fix; but the journal still seems to have acted reasonably.



If people thought this way - internalizing this publishing-point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret unless and until you can prove the whole bigger result by yourself. However long that might take.

If a series of incremental results were as prestigious as holding off to bundle them, people would have reason to collaborate and complete each other's work more eagerly. Delaying an almost complete result for a year so that a journal will think it has enough impact points seems straightforwardly net bad: it slows down both progress and collaboration.


> If people thought this way - internalizing this publishing-point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret unless and until you can prove the whole bigger result by yourself. However long that might take.

This is exactly what people think, and exactly what happens, especially in winner-takes-all situations. You end up with an interesting tension between how long you can wait to build your story, and how long until someone else publishes the same findings and takes all the credit.

A classic example in physics involves the discovery of the J/ψ particle [0]. Samuel Ting's group at MIT discovered it first (chronologically) but Ting decided he needed time to flesh out the findings, and so sat on the discovery and kept it quiet. Meanwhile, Burton Richter's group at Stanford also happened upon the discovery, but they were less inclined to be quiet. Ting found out, and (in a spirit of collaboration) both groups submitted their papers for publication at the same time, and were published in the same issue of Physical Review Letters.

They both won the Nobel 2 years later.

0: https://en.wikipedia.org/wiki/J/psi_meson


Wait, how did they both know that they both discovered it, but only after they had both discovered it?


People talk. The field isn't that big.


They got an optimal result in that case, isn't that nice.


The reasonable thing to do here is to discourage all of your collaborators from ever submitting anything to that journal again. Work with your team, submit incremental results to journals who will accept them, and let the picky journal suffer a loss of reputation from not featuring some of the top researchers in the field.


To supply a counter viewpoint here... The opposite is the "least publishable unit", which leads to loads and loads of almost-nothing results flooding the journals and other publication outlets. It would be hard to keep up with all that if there weren't a reasonable threshold. If anything, I find the current threshold too low rather than too high. The "publish or perish" principle also pushes people that way.


That's much less of a problem than the fact that papers are such poor media for sharing knowledge. They are published too slowly to be immediately useful versus just a quick chat, and simultaneously written in too rushed a way to comprehensively educate people on progress in the field.


> versus just a quick chat,

Everybody is free to keep a blog for this kind of informal chat/brainstorming communication. Paper publications should be well-written, structured, thought-through results that make it worthwhile for the reader to spend their time. Anything else belongs in a blog post.


The educational and editorial quality of papers from before 1980 or so beats just about anything published today. That is what publish-or-perish, impact-factor, smallest-publishable-unit culture did.


Don't know much about publishing in maths, but in some disciplines it is clearly incentivised to create the biggest possible number of papers out of a single research project, leading automatically to incremental publishing of results. I call it atomic publishing (from Greek atomos - indivisible), since such a paper contains only one result that cannot be split up any further.


Andrew Wiles spent 6 years working on 1 paper, and then another year working on a minor follow-up.

https://en.m.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27...


Or cheese slicer publishing, as you are selling your cheese one slice at a time. The practice is usually frowned upon.


I thought this was called salami slicing in publication.


Science is almost all incremental results. There's far more incentive to get published now than there is to "sit on" an incremental result hoping to add to it to make a bigger splash.


Academic science discovers continuous integration.

In the software world, it's often desired to have a steady stream of small, individually reviewable commits that each deliver an incremental slice of value.

Dropping a 20,000-files-changed bomb titled "Complete rewrite of the Linux kernel audio subsystem" is not seen as prestigious. Repeated, gradual contributions and involvement in the community are.


The big question here is whether journal space is a limited resource. Obviously it was at one point.

Supposing it is, you have to trade off publishing these incremental results against publishing someone else’s complete result.

What if it had taken ten papers to get there instead of two? For a sufficiently important problem, sure, but the interesting case is a problem that is only barely interesting enough to publish even as a complete result.


The limiting factor isn't journal space, but attention among the audience. (In theory) the journal's publishing restrictions help to filter and condense information so the audience is maximally informed, given that they will only read a fixed amount.


Journal space is not a limited resource. Premium journal space is.

That's because every researcher has a hierarchy of journals that they monitor. Prestigious journals are read by many researchers. So you're essentially competing for access to the limited attention of many researchers.

Conversely, publishing in a premium journal has more value than publishing in a regular journal. And the big scientific publishers are therefore in competition to make sure that they own the premium journals. Which they have multiple tricks to ensure.

Interestingly, their tricks only really work in science. That's because in the humanities, it is harder to establish objective opinions about quality. By contrast, everyone in science can agree that Nature generally has the best papers. So attempting to raise the price of a prestigious science journal works. Attempting to raise the price of a prestigious humanities journal results in its circulation going down. Which makes it less prestigious.


Space isn't a limited resource, but prestige points are deliberately limited, as a proxy for the publications' competition for attention. We can appreciate the irony while considering the outcome reasonable: after all, the results weren't kept out of the literature. They just got published with a label that more or less puts them lower in the search ranking for the next mathematician who looks up the topic.


Hyper-focusing on a single journal publication is going to lead to absurdities like this. A researcher is judged by the total delta of his improvements, at least by his peers and future humanity (the sum of all points, not the max).


It is easy to defend any side of the argument by inflating the "pitfalls of other approach" ad absurdum. This is silly. Obviously, balance is the key, as always.

Instead, we should look at which side the, uh, industry currently errs on. And this is definitely not the "sitting on your incremental results" side. The current motto of academia is to publish more. It doesn't matter if your papers are crap, it doesn't matter if you already have significant results and are working on something big: you have to publish to keep your position. How many crappy papers you release is a KPI of academia.

I mean, I can imagine a world where it would have been a good idea. I think it's a better world, where science journals don't exist. Instead, anybody can put any crap on ~arxiv.org~ Sci-Hub and anybody can leave comments, upvote/downvote stuff; papers have actual links and all the other modern social-network mechanics, up to the point where you can have a feed of the most interesting new papers tailored specially for you. This is open-source and non-profit, and 1/1000 of what universities used to pay for journal subscriptions is used to maintain the servers. Most importantly, because of some nice search screens or whatever, the paper's metadata becomes more important than the paper itself, and in the end we are able to attach a simple 10-word summary of what the current community consensus on the paper is: whether it proves anything, "almost proves" anything, has been disproved 10 times, 20 research teams failed to reproduce the results, or 100 people (see names in the popup) tried to read and failed to understand this gibberish. Nothing gets retracted, ever.

Then it would be great. But as things are, with all these "highly reputable journals" remaining a plague on society, it is actually kinda nice that somebody encourages you to finish your stuff before publishing.

Now, should these papers of Tao's have been rejected? I don't know, I think not. Especially the second one. But it's somewhat refreshing.


Two submissions in a medium-reputation journal do not have significantly lower prestige than one in a high-reputation journal.


Gauss did something along these lines and held back mathematical progress by decades.


Gauss had plenty of room for slack, giving people time to catch up on his work.

Every night Gauss went to sleep, mathematics was held back a week.


During his college/grad school days, he was going half nuts: ideas would come to him faster than he could write them down.

Finally one professor saw what was happening and insisted that Gauss take some time off; being German, that involved walking in the woods.


These patterns are ultimately detrimental to team/community building, however.

You see it in software as well: as a manager in calibration meetings, I have repeatedly seen how it is harder to convince a committee to promote or give a high rating to someone who has delivered a large pile of crucial but individually small projects than to someone with a single large project.

This is discouraging to people whose efforts go unrewarded, creates bad incentives for people to hoard work and avoid sharing until they can land one large impact, and is disastrous when (as on most software teams) those people don't have significant autonomy over which projects they're assigned.


Hello, fellow Metamate ;)


The idea that a small number of reviewers can accurately quantify the importance of a paper as some number of "impact points," and the idea that a journal should rely on this number and an arbitrary cutoff to decide publication, are both unreasonable ideas.

The journal may have acted systematically, but the system is arbitrary and capricious. Thus, the journal did not act reasonably.


> This seems reasonable?

In some sense, but it does feel like the journal is missing the bigger picture somewhat. Say the two papers are A and B, and we have A + B = C. The journal is saying they'll publish C, but not A and B!


How many step papers before a keystone paper seems reasonable to you?

I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.

On the other hand, publishing a proof of a Millennium Problem as several installments is probably a fantastic idea. Time to absorb each contributing result. And the suspense!

Then republish the collected papers as a signed, limited, leather-bound special edition. Easton, get on this!


Publishing partial results is always an invitation to compete in the completion, unless the completion is dependent on special lab capabilities which need time and money to acquire. There is no need to literally invite anyone.


I meant if the editors found the paper’s problem and progress especially worthy of a competition.


> I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.

Yeah I agree, a partial result is never going to be as exciting as a full solution to a major problem. Thinking on it a little more, it seems more of a shame the journal wasn't willing to publish the first part, as that sounds like it was the bulk of the work towards the end result.

I quite like that he went to publish a less-than-perfect result, rather than sitting on it in the hopes of making the final improvement. That seems in the spirit of collaboration and advancing science, whereas the journal rejecting the paper because it's 98% of the problem rather than the full thing seems a shame.

Having said that, I guess as a journal editor you have to make these calls all the time, and I'm sure every author pitches their work in the best light ("There's a breakthrough just around the corner...") and I'm sure there are plenty of ideas that turn out to be dead ends.


... A and B separately.


I agree this is reasonable from the individual publisher standpoint. I once received feedback from a reviewer that I was "searching for the minimum publishable unit", and in some sense the reviewer was right -- as soon as I thought the result could be published I started working towards the publication. A publisher can reasonably resist these kinds of papers, as you're pointing out.

I think the impact to scholarship in general is less clear. Do you immediately publish once you get a "big enough" result, so that others can build off of it? Or does this needlessly clutter the field with publications? There's probably some optimal balance, but I don't think the right balance is immediately clear.


Why would publishing anything new needlessly clutter the field?

Discovering something is hard, proving it correct is hard, and writing a paper about it is hard. Why delay all this?


Playing devil's advocate: there isn't a consensus on what is incremental vs. what is derivative. In theory, the latter may not warrant publication because anyone familiar with the state of the art could connect the dots without reading about it in a publication.


Ouch. That would hurt to hear. It's like they're effectively saying, "yeah, obviously you came up with something more significant than this, which you're holding back. No one would be so incapable that this was as far as they could take the result!"


Thankfully the reviewer feedback was of such low quality in general that it had little impact on my feelings, haha. I think that’s unfortunately common. My advisor told me “leave some obvious but unimportant mistakes, so they have something to criticize, they can feel good, and move on”. I honestly think that was good advice.


If this was actually how stuff was measured, it might be defensible. I'm having trouble believing that things are actually done this objectively rather than the rejections being somewhat arbitrary. Do you think that results can really be analyzed and compared in this way? How do you know that it's 5 and 2 and not 6 and 1 or 4 and 3, and how do you determine how many points a full result is worth in total?


But proportionally, wouldn't a solution without an epsilon loss be much better than a solution with epsilon?

I am not sure what exact conjecture the author solved, but if the epsilon difference is between an approximate solution and an exact solution, and the journal rejected the exact solution because it was "only an epsilon improvement", I might question how reputable that journal really is.


It's demonstrably (there is one demonstration right there) self-defeating and counter-productive, and so by definition not reasonable.

Each individual step along the way merely has some rationale, but rationales come in the full spectrum of quality.


Given the current incentive scheme in place it's locally reasonable, but the current incentives suck. Is the goal to score the most impact points or to advance our understanding of the field?


In my experience, it depends on the scientist. But it's hard to know what an advance is. Like, people long searched for evidence of æther before giving up and accepting that light doesn't need a medium to travel in. Perhaps 100 years from now people will laugh at the "Attention Is All You Need" paper that led to the LLM craze. Who knows. That's why it's important to give space to science. From my understanding, Lorenz worked for 5 years as a research scientist without publishing before writing his atmospheric circulation paper. That paper essentially created the field of chaos. Would he be able to do the same today? Maybe? Or maybe counting papers, impact factors, and all these other metrics turned science into a game instead of an intellectual pursuit. Shame we cannot ask Lorenz or Maxwell about their time as scientists. They are dead.


I don't think that's a useful way to think about this, especially when there's so little information provided. Reviewing is a capricious process.



