- Hands-On Neural Networks
- Leonardo De Marchi, Laura Mitchell
Anaconda
One of the main drawbacks encountered while using Python for data science is the amount of libraries that are necessary to install. Also, when provisioning instances for deploying your models, you will need to install all the necessary libraries to run your program, which might be problematic if you deploy to different platforms and operating systems.
Luckily, there are a few alternatives to venv. One of them is Anaconda: a free, open source Python distribution for data science and machine learning that aims to simplify package management and deployment. Anaconda's package manager is called conda; it installs, runs, and updates packages and their dependencies.
It's possible to install a smaller version of conda that ships with only a subset of the main libraries, called Miniconda. This version is quite handy when only the core libraries are needed, as it reduces both the size of the distribution and the time needed to install it.
To create an environment in an automated way, it's necessary to create a list of dependencies, as we did with venv. Conda is compatible with the pip format, but it also supports the more expressive YAML format.
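For comparison, the pip-style format mentioned here is simply a flat list of package specifiers, one per line (the version pins below are illustrative, not from the book):

```text
numpy==1.16.4
scipy
pandas>=0.24
```

The YAML format is more expressive because it adds an environment name, channels, and the ability to mix conda-managed and pip-managed dependencies in one file, as the next example shows.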
Let's see how this is done by performing the following steps:
- For example, let's create a dl.yaml file with the following content:

```yaml
name: dl_env # default is root
channels:
  - defaults
dependencies: # everything under this is installed by conda
  - python=3.7
  - numpy
  - scipy
  - pandas
  - tensorflow
  - matplotlib
  - keras
  - pip
  - pip: # everything under this is installed by pip
    - gym
```
- Place dl.yaml in a directory of your choice, and from a terminal in that same directory, enter the following command:
conda env create --file dl.yaml
- It's also necessary to activate the new environment. It's possible to do this at any point after creation by typing conda activate followed by the environment's name:
conda activate dl_env
Now, all Python calls will be directed to the Python interpreter in the conda environment that we just created.
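To double-check which interpreter is active after activation, a quick sketch in Python (the `CONDA_DEFAULT_ENV` variable is set by conda when an environment is activated; outside conda it is simply absent):

```python
import os
import sys

# Path of the interpreter currently running; inside an activated
# conda environment it lives under that environment's prefix.
print(sys.executable)

# conda exports CONDA_DEFAULT_ENV with the active environment's name;
# fall back to a placeholder when no conda environment is active.
print(os.environ.get("CONDA_DEFAULT_ENV", "no conda environment active"))
```

If the printed path does not point inside the dl_env prefix, the environment was not activated in the current shell.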