23
Aug

Python SQLAlchemy and Object-Relational Mapping

Whenever it comes to programming a web service, you will require a solid database backend. In the past, programmers wrote raw SQL statements, passed them to the database engine, and parsed the returned results as plain arrays of records. Nowadays, programmers can use Object-Relational Mapping (ORM) libraries, which remove the need to write tedious and error-prone raw SQL statements.

 

WHAT IS ORM?

Most programming language platforms are object oriented. ORM is a programming technique for converting data between incompatible type systems in object-oriented programming languages. Data in RDBMS (Relational Database Management System) servers, on the other hand, is stored as tables. Object-relational mapping is a technique for mapping object attributes to the underlying RDBMS table structure. An ORM API provides methods to perform create, read, update, delete (CRUD) operations without having to write raw SQL statements. So basically, an ORM takes care of these issues while you focus on programming the logic of your system.

 

Installation:

In order to run a Python script that uses SQLAlchemy, you need to install the Flask-SQLAlchemy extension first. To do that, run the command:
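The standard pip command is:

```shell
pip install flask-sqlalchemy
```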

 

Let’s Get To Know SQLAlchemy

 

To use SQLAlchemy functions in your program, you need to import it first using the statement:

Now you can create your Flask application, and make sure you set the URI for the database which is to be used in your program.

Now, you need to create an object of the SQLAlchemy class with the application object provided as a parameter. The object provides helper functions for ORM operations, and also provides a Model class from which user-defined models for the database are declared. In the following code snippet a Person model is created:

To create or open the database that we specified in the URI, run the create_all() method.

To add an object's data to the database, we can use the following code:

For deleting, just replace the add function with the delete function, and if you need to retrieve the records of the table, use:

You can also apply filters while retrieving records. For example:

will return the set of table rows whose city attribute is equal to ‘Mumbai’.

 

SAMPLE PROGRAM

 

Now, let’s see an entire program based on what we’ve learned so far:

First let’s setup an html page to display our database contents.

 

We’ve set up an HTML page with the heading SQLAlchemy Test, and it has a table with two columns to display a person’s name and phone number. Now we need to write a Python script to do our job.

In the above Python script, we create a model class for a person which can hold values for a person’s name and phone number. We set the route of our app to the homepage of localhost. So after running the app, when you open the browser and visit your localhost homepage by typing 127.0.0.1:5000 in your browser’s address bar, you can see the HTML page we’ve set up. The show_all function finally returns the rendered template of show_all.html populated with the contents of the database.

Now, we can check out how we add contents to the database. For that, we need to create objects of our model class and provide values for its attributes, i.e. the name and phone of a person. We can then add each one to our database using the SQLAlchemy object’s session.add(model_object) function. After adding all the values, make sure you commit the changes so they are actually written to the database. Now that we have our database ready with contents, we can display it on our HTML page using the render_template('html_file_name', model=Model.query.all()) function.

Save the Python file and place the HTML file in a folder named templates. If no such folder exists, create one. Now you can run the Python script and visit 127.0.0.1:5000 in your browser to see your app running.

 

Benefits of SQLAlchemy:

According to http://pajhome.org.uk/blog/10_reasons_to_love_sqlalchemy.html, here are the top 10 reasons to love SQLAlchemy:

  • Lets you define the database schema in your code
  • Automatically synchronises the model and schema
  • Easy to read
  • Simple queries
  • Seamless integration with web frameworks
  • Fast loading, better performance
  • Transparent polymorphism
  • Works with legacy frameworks
  • Easy to customise the library
  • Great documentation

 

Challenges of SQLAlchemy:

  • The concept of unit of work is not widely known among the developer community
  • A heavy-weight API

 

 

21
Aug

Python Scrapy Library

What is Scrapy?

 

Scrapy is an open source and collaborative framework for extracting the data you need from websites in a fast, simple, yet extensible way.

 

But what do you mean by scraping data?

 

Web scraping is a computer software technique of extracting information from websites. This technique mostly focuses on the transformation of unstructured data (HTML format) on the web into structured data (database or spreadsheet).

In Python, web scraping can be done using Scrapy.

 

Let’s get started.

Installation first.

You can easily install it using pip. For other installation options, click here. Type the following at your command prompt:
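The pip command is:

```shell
pip install scrapy
```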

Now, let’s get our hands on some coding.

Let’s start off by creating a scrapy project. Enter the directory of your choice and type in the following.
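The command to create a new project named tutorial is:

```shell
scrapy startproject tutorial
```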

Something like this prints out for you.

You can start your first spider with:
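The startproject output suggests these commands (the spider name and domain here are the placeholders Scrapy itself suggests):

```shell
cd tutorial
scrapy genspider example example.com
```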

The startproject command will have created a directory tutorial with the following contents:

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # project’s Python module, you’ll import your code from here
        __init__.py
        items.py          # project items definition file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # a directory where you’ll later put your spiders
            __init__.py

 

Now let’s create a spider, but what are spiders?

Spiders are classes that you define and that Scrapy uses to scrape information from a website (or a group of websites). They must subclass scrapy.Spider and define the initial requests to make, and optionally how to follow links in the pages and how to parse the downloaded page content to extract data.

We will be using examples from the official doc.

So save the following code in a file named quotes_spider.py under the tutorial/spiders directory in your project:

As you can see, our Spider subclasses scrapy.Spider

Let’s see what each of the attributes and methods means.

  • name: identifies the Spider. It must be unique within a project, that is, you can’t set the same name for different Spiders.
  • start_requests(): must return an iterable of Requests (you can return a list of requests or write a generator function) which the Spider will begin to crawl from. Subsequent requests will be generated successively from these initial requests.
  • parse(): a method that will be called to handle the response downloaded for each of the requests made. The response parameter is an instance of TextResponse that holds the page content and has further helpful methods to handle it.

The parse() method usually parses the response, extracting the scraped data as dicts and also finding new URLs to follow and creating new requests (Request) from them.

 

Now let’s run our spider.

Go to the top level directory and type in the following in your cmd.

This command runs the spider named quotes that we’ve just added, which will send some requests to the quotes.toscrape.com domain. You will get an output similar to this:

… (omitted for brevity)

 

Source: https://doc.scrapy.org/en/latest/intro/tutorial.html

Note:

Two new files have been created in the directory you ran the command from: quotes-1.html and quotes-2.html, with the content for the respective URLs, as our parse method instructs.

Beautiful! Isn’t it?

Benefits of Scrapy:

  • Scrapy is a full framework for web crawling which has the tools to manage every stage of a web crawl.
  • Compared with Beautiful Soup, where you need to provide a specific URL and it helps you get the data from that page, you can give Scrapy a start URL and it will go on crawling and extracting data without you having to explicitly give it every single URL.
  • It can crawl the contents of your webpage prior to extracting.

 

Challenges of Scrapy:

  • To parse just a few webpages, Scrapy is overkill. Beautiful Soup is better.

 

To learn to play with Scrapy, check out

https://doc.scrapy.org/en/latest/intro/tutorial.html

www.tutorialspoint.com/scrapy/

 

19
Aug

Python Requests Library

What is Requests?

Requests is an elegant and simple Apache2-licensed HTTP library for Python, designed to be used by humans. This means you don’t have to manually add query strings to URLs, or form-encode your POST data.

Requests will allow you to send HTTP/1.1 requests using Python. With it, you can add content like headers, form data, multipart files, and parameters via simple Python dictionaries. It also allows you to access the response data in the same way.

 

Installation:

 

Here, I am installing using pip. For more options, see the Requests documentation.
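The pip command is:

```shell
pip install requests
```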

 

Let’s get started.

First, let’s import requests.

Here, I am using a webpage within my server, which opens to a page like this.

 

Let’s create a Response object r.

Let’s check the current status of the site

 

Let’s get the content of the site

 

To know what encoding it uses,

Let’s change that to something else, say:
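The sequence above can be sketched as follows. The URL is a placeholder (the original uses a page on the author’s own server), and utf-8 is just an illustrative codec:

```python
import requests

# Create a Response object r.
r = requests.get('https://example.com/')

print(r.status_code)     # current status of the site
print(r.content[:60])    # raw content of the site, as bytes
print(r.encoding)        # the encoding Requests guessed

# Override the encoding that will be used to decode r.text.
r.encoding = 'utf-8'
print(r.text[:60])       # now decoded with the encoding we set
```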

 

You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTML and XML have the ability to specify their encoding in their body. In situations like this, you should use r.content to find the encoding, and then set r.encoding. This will let you use r.text with the correct encoding.

Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the codecs module, you can simply use the codec name as the value of r.encoding and Requests will handle the decoding for you.

 

There’s also a builtin JSON decoder, in case you’re dealing with JSON data:

 

With every request you send to an HTTP server, the server will send you some additional data. You can extract that data from an HTTP response using:
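A sketch of both the JSON decoder and the response headers, using a public JSON endpoint as a stand-in for the original’s server:

```python
import requests

# Placeholder endpoint that returns JSON.
r = requests.get('https://api.github.com/')

data = r.json()                    # built-in JSON decoder
print(r.headers['Content-Type'])   # additional data the server sent back
```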

 

Benefits of Requests.

This is the list of features from the requests site:

  • International Domains and URLs
  • Keep-Alive & Connection Pooling
  • Sessions with Cookie Persistence
  • Browser-style SSL Verification
  • Basic/Digest Authentication
  • Elegant Key/Value Cookies
  • Automatic Decompression
  • Unicode Response Bodies
  • Multipart File Uploads
  • Connection Timeouts
  • .netrc support
  • Python 2.6—3.4
  • Thread-safe.

An awesome lot of features, right?

 

 

 

To know more,

http://docs.python-requests.org/en/master/

http://docs.python-requests.org/en/latest/api/

http://pypi.python.org/pypi/requests

http://docs.python-requests.org/en/latest/user/quickstart/

http://isbullsh.it/2012/06/Rest-api-in-python/#requests

http://docs.python-requests.org/en/master/user/quickstart/

 

17
Aug

Python Django

What is Django?

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. When you’re building a website, you always need a similar set of components: a way to handle user authentication (signing up, signing in, signing out), a management panel for your website, forms, a way to upload files, etc. Python Django takes care of the repetitive work for you so that you don’t have to reinvent the wheel all over again.

 

Let’s install Django

 

You can install Django using
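The pip command is:

```shell
pip install Django
```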

For other ways of installation, visit this.

 

Let’s get started

Since this is the first time we are using Django, there is some initial setup to do. We need to auto-generate some code that establishes the Django project.

From the command line,
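Assuming Django is installed, the command to run is:

```shell
django-admin startproject mysite
```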

This will create a folder ‘mysite’ in your current directory. Avoid using the names of existing Python components as the folder name; for example, ‘django’ and ‘test’ are not good choices since they conflict with existing names when imported.

Also, with Django you do not put your code under your web server’s root directory. We always place it outside.

Let’s look at what startproject created:

mysite/
    manage.py
    mysite/
        __init__.py
        settings.py
        urls.py
        wsgi.py

 

These files are:

  • The outer mysite/ root directory is just a container for your project. Its name doesn’t matter to Django; you can rename it to anything you like.
  • manage.py: A command-line utility that lets you interact with this Django project in various ways. You can read all the details about manage.py in django-admin and manage.py.
  • The inner mysite/ directory is the actual Python package for your project. Its name is the Python package name you’ll need to use to import anything inside it (e.g. urls).
  • mysite/__init__.py: An empty file that tells Python that this directory should be considered a Python package. If you’re a Python beginner, read more about packages in the official Python docs.
  • mysite/settings.py: Settings/configuration for this Django project. Django settings will tell you all about how settings work.
  • mysite/urls.py: The URL declarations for this Django project; a “table of contents” of your Django-powered site. You can read more about URLs in URL dispatcher.
  • mysite/wsgi.py: An entry-point for WSGI-compatible web servers to serve your project. See How to deploy with WSGI for more details.

 

Let’s verify whether our Django works.
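From inside the project directory, start the development server:

```shell
cd mysite
python manage.py runserver
```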

You can safely ignore the warning for now.

 

Yippee! We have started the Django server! Let’s go see how it looks.

 

Note: This is only for studying. When it comes to a production setting, use Apache or a similar web server.

By default the port is 8000, but you can change it from the command line. Say you want to change it to 8080:
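The command, run from the same project directory, is:

```shell
python manage.py runserver 8080
```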

Full docs for the development server can be found in the runserver reference.

 

But why Python Django?

  • It’s the best choice for quick web development
  • Transparent and clean code
  • Fewer lines of code
  • Saves time and money on development
  • Works well with high loads

Why not Django?

  1. Doesn’t support real-time applications
  2. Broad knowledge of the system is required
  3. ORM is very monolithic compared to SQLAlchemy.

 

To know how to edit and configure your webpages, check out https://docs.djangoproject.com/en/1.11/intro/tutorial01/

 

For more details visit,

https://www.djangoproject.com/

https://docs.djangoproject.com/en/1.11/#index-first-steps

https://www.djangoproject.com/start/

https://docs.djangoproject.com/en/1.11/

https://tutorial.djangogirls.org/en/django_start_project/

 

 

 

 
