Highest Rated Comments


ViralBShah · 114 karma

There are several answers:

  1. We have a huge number of learning resources on the julialang.org website: tutorials, videos, books, and the like.
  2. David Sanders gave an excellent workshop at JuliaCon 2020 just a couple of days ago on learning Julia via epidemic modeling, and it has already become very popular.
  3. JuliaAcademy has a number of courses, all available for free.
  4. Julia Computing also offers commercial training.

ViralBShah · 107 karma

Industry adoption is very much on the rise. We'll be presenting the findings of our annual survey this week. The community is roughly half a million users. In a survey of 2,500 people (naturally self-selected), 40% identified as professional users - a fairly large number of industry users.

Adoption has grown over the years, largely because we addressed what the community wants - performance (as always), IDEs, profiling and debugging tools, PackageCompiler, time to first plot, etc. In my opinion, further improvements in compile times, and the ability to build binaries from Julia programs, will lead to an explosion in the user base.

Many large companies in finance and pharma have substantial Julia codebases. Many startups use Julia to build their products and services - Pumas AI, Relational AI, Beacon Biosignals, Invenia. These are just a few I can name, and there are several more.

ViralBShah · 56 karma

Julia is an open source project, so apart from community resources, it does not by itself have to worry about such things. We usually make a small profit from JuliaCon, which funds community infrastructure like CI, downloads, etc.

Julia Computing, on the other hand, employs many Julia contributors and does need to break even. We are fairly conservative as a business and have grown on the strength of customer contracts, government grants, and foundation funds. Investment by General Catalyst and Founder Collective made it possible for us to start thinking more in terms of a scalable, product-oriented business. Now that 1.0 is out, we are focussing on online training and JuliaTeam (which makes it easy for corporate users to install, upgrade, and govern Julia and package installations) to build a scalable product business.

ViralBShah · 26 karma

Realistically, I don't think that scientific programmers are going to use that many parentheses. Sorry, I felt like someone had to say it out loud. :-)

However, on a more serious note, where Julia succeeds in this space is that it picks up many good ideas from Lisp and combines them with a set of language features that makes it particularly well suited to its domain.
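One of those Lisp-inspired ideas is that Julia code is itself a data structure that macros can transform. A minimal sketch (my illustration, not part of the original comment):

```julia
# Julia expressions are data (Expr objects), much like Lisp s-expressions.
ex = :(2 * x + 1)    # quote an expression instead of evaluating it
dump(ex)             # inspect its head and arguments

# A macro receives expressions and returns new ones before compilation.
macro twice(e)
    return quote
        $(esc(e))
        $(esc(e))
    end
end

x = 3
@twice println(2 * x + 1)   # prints 7 twice
```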

ViralBShah · 26 karma

The community is quite focussed on having all of Julia be naturally useful for AI. The focus is not so much on creating yet another framework, but on making sure that the whole of Julia is available for machine learning. This pretty much boils down to two things: great support for automatic differentiation (AD), and good support for native code generation on hardware accelerators (mostly GPUs, but increasingly TPUs and various new things in the pipeline).
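To give a concrete sense of the first pillar, here is a minimal sketch using ForwardDiff.jl, a widely used Julia AD package (not named in the comment), to differentiate an ordinary Julia function:

```julia
using ForwardDiff   # assumes the ForwardDiff.jl package is installed

# An ordinary Julia function, written with no AD framework in mind.
f(x) = sum(abs2, x) + prod(x)

x = [1.0, 2.0, 3.0]
g = ForwardDiff.gradient(f, x)   # exact gradient via forward-mode AD
println(g)
```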

At JuliaCon, Jarrett Revels announced Cassette.jl (https://github.com/jrevels/Cassette.jl) and Capstan.jl (the AD package that leverages Cassette's compiler enhancements). With these packages, we now have a general way to do AD on entire Julia programs. The CUDAnative.jl and related GPU packages give us a general way to run Julia on GPUs, and the underlying refactoring makes it easy to target TPUs and other special-purpose processors. With all these projects stabilizing for 1.0, we feel that Julia is already a compelling language for AI researchers and users.
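A rough sketch of what the CUDAnative.jl route looks like - a kernel written in plain Julia and launched on the GPU. This reflects the 1.0-era package API (CUDAnative.jl and CuArrays.jl) and the exact calls may have changed in later releases:

```julia
using CUDAnative, CuArrays   # GPU packages from the 1.0-era ecosystem

# A kernel written as ordinary Julia; CUDAnative compiles it to GPU code.
function add_one!(a)
    i = threadIdx().x   # 1-based thread index within the block
    a[i] += 1
    return nothing
end

a = CuArray(ones(Float32, 32))          # array living in GPU memory
@cuda threads=length(a) add_one!(a)     # launch one thread per element
println(Array(a))                       # copy back to the CPU and inspect
```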

In the meantime, we have Flux.jl (https://github.com/FluxML/Flux.jl) by Mike Innes and Knet.jl (https://github.com/denizyuret/Knet.jl) by Deniz Yuret, both of which provide significant AI capabilities. The goal is framework-less AI: just write straight Julia code, and we'll be able to differentiate it, stick it into an optimizer, and run it on a GPU - without needing new programming models (like writing out a computational graph) or frameworks that re-implement all the libraries.
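As a small illustration of that "straight Julia code" style, a sketch with Flux.jl; the Flux API has shifted across versions, so treat this as indicative rather than exact:

```julia
using Flux   # assumes a Flux.jl version with the implicit-parameter gradient style

# A tiny model written as ordinary Julia code - no graph-building step.
model = Chain(Dense(4, 8, relu), Dense(8, 1))
loss(x, y) = Flux.mse(model(x), y)

x = rand(Float32, 4, 16)   # 16 samples with 4 features each
y = rand(Float32, 1, 16)

# Differentiate the plain Julia loss with respect to the model parameters.
grads = Flux.gradient(() -> loss(x, y), Flux.params(model))
println(loss(x, y))
```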