Hey, this is my first article; I hope you find it informative. I have recently become interested in AI and machine learning, and I have been learning by experimenting. I usually use TensorFlow and CNTK in parallel, so first I will give a little introduction to both: what they are and what they do.
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
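To make the dataflow-graph idea concrete, here is a tiny conceptual sketch in plain Python (this is not TensorFlow itself, just an illustration of the model it uses): nodes are operations, edges carry values between them, and nothing is computed until you ask for a result.

```python
# Conceptual sketch (plain Python, not TensorFlow) of a dataflow graph:
# nodes are operations, edges carry values between them, and the graph
# is only evaluated when a result is requested.
class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # the operation performed at this node
        self.inputs = inputs  # incoming edges from other nodes

    def run(self):
        # Evaluate upstream nodes first, then apply this node's op.
        return self.op(*(n.run() for n in self.inputs))

def const(value):
    return Node(lambda: value)

a, b = const(2.0), const(3.0)
mul = Node(lambda x, y: x * y, (a, b))             # a * b
out = Node(lambda x, y: x + y, (mul, const(1.0)))  # (a * b) + 1
print(out.run())  # 7.0
```

TensorFlow works on the same principle, but its nodes operate on tensors and can be placed on CPUs and GPUs.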
As some of you may know, TensorFlow was originally developed by researchers and engineers on the Google Brain team within Google's Machine Intelligence research organization to conduct machine learning and deep neural network research, but the system is general enough to be applicable in a wide variety of other domains as well.
The current release version of TensorFlow is r1.4. APIs are available in several languages (Python, C++, Java, Go) for both constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the other language APIs may be easier to integrate into projects and may offer some performance advantages in graph execution. Bindings for C#, Haskell, Julia, Ruby, Rust, and Scala are also available.
Microsoft Cognitive Toolkit (CNTK) is a unified deep-learning toolkit from Microsoft.
CNTK can be included as a library in your Python, C#/.NET, or C++ programs, or used as a standalone machine learning tool through its own model description language (BrainScript). In addition, you can use the CNTK model evaluation functionality from your Java programs.
The current release version of CNTK is 2.2.
Both toolkits support Python 3.6, and Python is the language most widely used by developers and professionals in this field.
This is where Keras comes in.
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
Use Keras if you need a deep learning library that:
- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.
Read the documentation at Keras.io. Keras is compatible with Python 2.7-3.6.
Why this name, Keras?
Keras (κέρας) means horn in Greek. It is a reference to a literary image from ancient Greek and Latin literature, first found in the Odyssey, where dream spirits (Oneiroi, singular Oneiros) are divided between those who deceive men with false visions, who arrive at Earth through a gate of ivory, and those who announce a future that will come to pass, who arrive through a gate of horn.
Keras was initially developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System).
Getting started: 30 seconds to Keras
The core data structure of Keras is a model, a way to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers.
Here is the Sequential model:
from keras.models import Sequential
model = Sequential()
Stacking layers is as easy as .add():
from keras.layers import Dense, Activation
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))
Once your model looks good, configure its learning process with .compile():

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code).
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True))
You can now iterate on your training data in batches:
# x_train and y_train are NumPy arrays, just like in the Scikit-Learn API.
model.fit(x_train, y_train, epochs=5, batch_size=32)
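As a side note on what batch_size means here, this small NumPy sketch (with made-up data shapes, not real training data) shows how a training set gets split into chunks of 32 samples, one gradient update per chunk:

```python
import numpy as np

# Conceptual sketch (not Keras itself) of what batch_size=32 means:
# the training set is split into chunks of 32 samples, and the model
# performs one gradient update per chunk. Shapes here are made up.
x_train = np.zeros((100, 10))  # 100 samples, 10 features each
batch_size = 32
batches = [x_train[i:i + batch_size]
           for i in range(0, len(x_train), batch_size)]

print(len(batches))          # 4 chunks: 32 + 32 + 32 + 4 samples
print(batches[-1].shape[0])  # the last chunk is a partial batch of 4
```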
Alternatively, you can feed batches to your model manually:

model.train_on_batch(x_batch, y_batch)
Evaluate your performance in one line:
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
Or generate predictions on new data:
classes = model.predict(x_test, batch_size=128)
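For a softmax output like the one above, predict returns one row of class probabilities per sample. A common follow-up step, sketched here with made-up numbers rather than real model output, is taking the argmax to recover class labels:

```python
import numpy as np

# Made-up probabilities standing in for the array a softmax model's
# predict call would return: one row of class scores per sample.
probs = np.array([[0.1, 0.7, 0.2],   # sample 1: class 1 most likely
                  [0.8, 0.1, 0.1]])  # sample 2: class 0 most likely
labels = probs.argmax(axis=1)        # pick the highest-scoring class
print(labels.tolist())  # [1, 0]
```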
Building a question answering system, an image classification model, a Neural Turing Machine, or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?
For a more in-depth tutorial about Keras, you can check out the guides in the documentation at Keras.io.
I hope this helps you get started, and I wish you good luck on your machine learning journey with TensorFlow, CNTK, and Keras.