TensorFlow High-Level APIs: Models in a Box (TensorFlow Dev Summit 2017)
TensorFlow allows you to define models using both low- and high-level abstractions. In this talk, Martin Wicke introduces Layers, Estimators, and Canned Estimators for defining models, and shows the roadmap for their availability in core TensorFlow.
Visit the TensorFlow website for all session recordings: https://goo.gl/bsYmza
Subscribe to the Google Developers channel at http://goo.gl/mQyv5L
By anonymous 2017-09-20
This portion of the GitHub tree is under active development. I expect this warning message to go away once the Estimator class is moved into
tf.core, which is scheduled for version r1.1. I found the 2017 TensorFlow Dev Summit video by Martin Wicke very informative about the future plans for high-level TensorFlow.
By anonymous 2017-09-20
TF is not written in Python. It is written in C++ (and uses high-performance numerical libraries and CUDA code), and you can check this by looking at their GitHub. So the core is not written in Python, but TF provides an interface to many other languages (Python, C++, Java, Go).
If you come from the data analysis world, you can think of it like NumPy (not written in Python, but provides a Python interface), or if you are a web developer, think of it as a database (PostgreSQL, MySQL), which can be invoked from Java, Python, PHP.
The Python frontend (the language in which people write models in TF) is the most popular for many reasons. In my opinion the main reason is historical: the majority of ML users already use Python (another popular choice is R), so if you do not provide a Python interface, your library is most probably doomed to obscurity.
But being written in Python does not mean that your model is executed in Python. On the contrary, if you write your model in the right way, Python is never executed during the evaluation of the TF graph (except for tf.py_func(), which exists for debugging and should be avoided in real models precisely because it is executed on Python's side).
This is different from, for example, NumPy. If you compute
np.linalg.eig(np.matmul(A, np.transpose(A))) (which is
eig(AA')), NumPy will compute the transpose in some fast language (C++ or Fortran), return it to Python, take it back from Python together with A, compute the multiplication in the fast language and return it to Python, then compute the eigenvalues and return them to Python. So even though expensive operations like matmul and eig are calculated efficiently, you still lose time moving the results back and forth between Python and the fast code. TF does not do this: once you have defined the graph, your tensors flow not in Python but in C++/CUDA/something else.
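The NumPy expression above, written out as a runnable sketch (A here is just a small example matrix I made up; the comments mark where each intermediate result crosses back into Python):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Each call below runs in compiled C/Fortran code, but its result
# is handed back to the Python interpreter before the next call starts.
At = np.transpose(A)                    # result returned to Python
M = np.matmul(A, At)                    # A @ A', result returned to Python
eigenvalues, eigenvectors = np.linalg.eig(M)  # result returned to Python
```

In a TF graph, by contrast, the transpose, matmul, and eig nodes would all be evaluated inside one sess.run() call without the intermediates ever surfacing in Python.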