What you suggest seems plausible, but there is a good counterexample: Overleaf is also doing well while relying on the open-source LaTeX. What drives people to subscribe is not the typesetting itself but the ecosystem around it (collaborative editing, version management, easy sharing, etc.). You can make money with those and still keep the rendering free/open-source. I believe something similar is or will be true for Typst as well.
That is a bad counterexample. There is a world of difference between the main devs offering a paid service and some unaffiliated company offering services.
In principle, having a reliable source of funding for Typst is great. However, as a journal this would make me hesitant: what if, down the road, some essential features become subscription-only?
Reminds me a bit of Isaac Asimov's novel "I, Robot", where they rely on positronic brains to do things. In the story, mathematics has caught up and developed a framework for analysing the behavior of an AI system. I wonder if something similar will happen if CS becomes an empirical science, i.e., will we try to infer laws from empirical measurements of AI behavior so that we can reason about it more effectively? That would turn CS somewhat into physics, but based on an artificial system. Very strange times.
> these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries.
I guess we should figure out how to include the three laws of robotics in connectionist models asap…
I can second this; even the availability of code is still a problem. However, I would not say CS results are rarely reproducible, at least in the few experiences I have had so far, though I have heard of problematic cases from others. I guess it also differs between fields.
I want to note that there is hope. Contrary to what the root comment says, some publishers do try to promote reproducible results; see, for example, the ACM reproducibility initiative [1]. I have participated in it before and believe it is a really good initiative. Reproducing results can be very labor-intensive, though, which further loads a review system already struggling under a massive flood of papers. It is also not perfect: most of the time it only ensures that the author-supplied code produces the presented results. Still, I think more such initiatives are healthy. When you really want to ensure the rigor of a presented method, you have to replicate it, e.g., by reimplementing it in a different programming language, which is really its own research endeavor. And there is already a place in CS to publish such results [2] (although I haven't tried that one). I imagine this may be especially interesting for PhD students just starting out in a new field, as it gives them the opportunity to learn while satisfying the expectation of producing papers.
This is very interesting! I think an exciting direction would be to arrive at minimal circuits that are to some extent comprehensible to humans. This might not be possible for every system, but surely the rules of Conway's GoL can be expressed in fewer than 350 logic gates per cell?
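To make the "small circuit" intuition concrete, here is a little sketch of my own (not from the article): the per-cell rule is a pure function of nine input bits, and counting the eight neighbor bits plus comparing against 2/3 is just a small adder tree and a comparator.

```python
# Sketch: the per-cell GoL rule as a pure function of its 9 input bits.
# A dead cell becomes alive with exactly 3 live neighbors; a live cell
# survives with 2 or 3. In hardware this is an 8-input population count
# plus two equality checks -- a few dozen gates, well under 350.
def gol_next(center, neighbors):
    """center: 0/1; neighbors: iterable of eight 0/1 values."""
    n = sum(neighbors)
    return 1 if (n == 3 or (center == 1 and n == 2)) else 0

# A live cell with two live neighbors survives.
assert gol_next(1, [1, 1, 0, 0, 0, 0, 0, 0]) == 1
# A dead cell with three live neighbors is born.
assert gol_next(0, [1, 1, 1, 0, 0, 0, 0, 0]) == 1
# A live cell with four live neighbors dies of overcrowding.
assert gol_next(1, [1, 1, 1, 1, 0, 0, 0, 0]) == 0
```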
This also reminds me of using Hopfield networks to store images. It seems like Hopfield networks are a special case of this where each cell's update is a thresholded sum of its inputs, but I'm not sure. Another difference is that Hopfield networks are fully connected, so the neighborhood is the entire world, i.e., they are local in time but not local in space. Maybe someone can clarify this further?
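For comparison, here is a minimal Hopfield toy example of my own (a sketch, not taken from the article): one bipolar pattern is stored via the Hebbian rule and recovered from a corrupted copy. Note that each unit's update is a thresholded weighted sum over all other units, which is exactly the "neighborhood is the whole world" point.

```python
import numpy as np

# Store one bipolar pattern with the Hebbian rule, then recover it from a
# corrupted copy. Each unit's new state is sign(sum over ALL other units),
# so the "neighborhood" is the entire network, not a local window.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

state = pattern.copy()
state[0] *= -1  # corrupt one bit
for _ in range(5):  # synchronous updates; converges quickly here
    state = np.sign(W @ state).astype(int)

assert np.array_equal(state, pattern)  # the stored pattern is recovered
```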
You can actually use this to import PDFs generated with Matplotlib as vector graphics into Impress presentations. This allows you to change, e.g., the color of lines or the legend (or any other part of the plot) right within Impress to better fit your presentation. I have found this extremely useful in the past. In PowerPoint, I could not even import an SVG, let alone a PDF (although maybe the newest version supports this?).
The only downside is that you currently have to first import the PDF into Draw and then copy the shapes/curves over to Impress. I hope direct import into Impress will be added in the future.
There is also an open-source/free version of this [1], which I use regularly. You can install it, e.g., on Fedora, via the 'diffpdf' package. It is no longer maintained but works very well, has a nice GUI with a side-by-side view, drag-and-drop support, and both text and visual comparison modes.
(I am one of the authors) Generally speaking, the latter. The purpose of DiscoGrad is just to deliver useful gradients. These provide information about the local behavior of the cost function around the currently evaluated point to an optimizer of your choice, e.g., gradient descent. Interestingly, the smoothing and noise can sometimes prevent getting stuck in undesired (shallow) local minima when using gradient descent.
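As a toy illustration of that last point (my own sketch, not DiscoGrad's actual algorithm): on a rippled cost function, a Monte Carlo estimate of the gradient of the Gaussian-smoothed cost averages the shallow ripples away, so plain gradient descent on it walks to the global minimum.

```python
import math
import random

# Rippled cost: a global minimum at 0 surrounded by many shallow local minima.
def f(x):
    return x * x + 0.5 * math.cos(20 * x)

def smoothed_grad(x, sigma=0.3, samples=2000):
    # Unbiased Monte Carlo estimate of d/dx E[f(x + sigma*Z)], Z ~ N(0,1),
    # via an antithetic score-function form. The smoothing wipes out the
    # high-frequency ripples, leaving roughly the gradient of x^2.
    rng = random.Random(0)  # fixed seed: deterministic for this demo
    total = 0.0
    for _ in range(samples):
        z = rng.gauss(0.0, 1.0)
        total += (f(x + sigma * z) - f(x - sigma * z)) * z / (2 * sigma)
    return total / samples

x = 2.0
for _ in range(200):
    x -= 0.05 * smoothed_grad(x)

assert abs(x) < 0.3  # near the global minimum, past all the ripples
```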
Thanks! You may find DeepProbLog by Manhaeve et al. interesting, which brings together logic programming, probabilistic programming and gradient descent/neural networks. Also, more generally, I believe in the field of program synthesis there is some research on deriving programs with gradient descent. However, as also pointed out in the comment below, gradient descent may not always be the best approach to such problems (e.g., https://arxiv.org/abs/1608.04428).
You are right in that the use-cases are very similar to regular autodiff, with the added benefit that the returned gradient also accounts for the effects of taking alternative branches.
Just to clarify: we do a kind of source-to-source transformation by transparently injecting some API calls in the right places (e.g., before branching statements) prior to compilation. The compiled program then returns the program output alongside the gradient.
For the continuous parts, the AD library that comes with DiscoGrad uses operator overloading.
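In case the term is unfamiliar, operator-overloading AD in general looks roughly like this (a generic dual-number sketch to illustrate the technique, not our actual library): every value carries its derivative along, and the arithmetic operators propagate both.

```python
# Minimal forward-mode AD via operator overloading: a Dual stores a value
# and its derivative, and each overloaded operator applies the chain rule.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2

x = Dual(4.0, 1.0)  # seed dx/dx = 1
y = f(x)
assert (y.val, y.dot) == (57.0, 26.0)  # f(4) = 57, f'(4) = 26
```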
> with the added benefit that the returned gradient also accounts for the effects of taking alternative branches.
Does this mean that you can take the partial derivative with respect to some Boolean variable that will be used in an if statement (for example), but with regular autodiff you can't?
I'm struggling to understand why regular autodiff works even in presence of this limitation. Is it just a crude approximation of the "true" derivative?
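To make my question concrete, here is a toy example of what I mean (my own sketch): regular AD only differentiates the branch that actually executes, so a jump at the branch point contributes nothing to the gradient, whereas a wide finite difference (standing in for a smoothed gradient) does see it.

```python
# A step function: discontinuous at x = 0.
def f(x):
    return 1.0 if x < 0 else 0.0

def ad_style_grad(x):
    # Hand-written stand-in for what plain AD would return: both branch
    # bodies are constants, so the derivative of the taken path is 0
    # everywhere -- the jump itself is invisible.
    return 0.0

def wide_fd_grad(x, eps=0.5):
    # A wide finite difference straddles the jump and reports the drop,
    # similar in spirit to a smoothed/stochastic gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

assert ad_style_grad(0.1) == 0.0   # AD-style: no sensitivity at all
assert wide_fd_grad(0.1) == -1.0   # smoothed view: the step is visible
```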
https://enzyme.mit.edu