Dev Shed Forums - Python Programming http://forums.devshed.com/ Python Programming forum discussing coding techniques, tips and tricks, and Zope-related information. Python was designed from the ground up to be a completely object-oriented programming language. Wed, 21 Feb 2018 19:47:12 GMT

Building valgrind in python http://forums.devshed.com/python-programming/980052-building-valgrind-python-new-post.html Thu, 15 Feb 2018 12:57:50 GMT

Hi,

I am trying to enable Valgrind support in Python 2.7.5, so I configured with the following command:

./configure --without-pymalloc --with-pydebug --with-valgrind

I got the following error:
configure: error: Valgrind support requested but headers not available

How can I resolve this error? Can anyone advise me?
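For reference: that configure check looks for the valgrind/valgrind.h header, which on CentOS/RHEL normally comes from a separate development package. A sketch, assuming a yum-based system (package names differ on other distros):

```shell
# Install the Valgrind headers; on CentOS/RHEL the package is valgrind-devel
# (assumption: on Debian/Ubuntu the headers ship in the valgrind package itself)
sudo yum install valgrind-devel

# then re-run the configure step from the Python source directory
./configure --without-pymalloc --with-pydebug --with-valgrind
```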
Python Programming arunsolo1984 http://forums.devshed.com/python-programming-11/building-valgrind-python-980052.html
Valgrind error in python http://forums.devshed.com/python-programming/980041-valgrind-error-python-new-post.html Wed, 14 Feb 2018 07:38:31 GMT

Hi...

I am facing a problem with Python and Valgrind: I was getting Valgrind's default warning messages. I created sample.py, a file with no code in it at all, to confirm that the warnings do not come from memory leaks in my own code.

Following is the Valgrind command that I am using on the command line:
valgrind --leak-check=full --show-reachable=yes --error-limit=no --gen-suppressions=all --log-file=msm_suppress.log -v /home/arunspra/py_src/Python-2.7.5/python sample.py

I was getting plenty of valgrind warnings.

I searched on Google and learned that I need to configure Python with pymalloc disabled. According to what I read, with pymalloc disabled there should be no spurious memory-related errors - but I was still getting memory-related errors. This is the command I used to disable pymalloc:

./configure --without-pymalloc --with-pydebug
make

Then I ran the Valgrind command above and got 1299 Valgrind warnings. With pymalloc enabled, I get only 108 Valgrind warnings.

These are my software versions:
CentOS: 7.3
Python: 2.7.5
Valgrind: 3.12.0



PS: If I configure and build Python myself, I get an ImportError: No module named netifaces. I am using netifaces in my project. With the system's built-in Python I do not get the netifaces import error.

Can anyone please advise me on how to resolve this issue?
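For reference: the CPython 2.7 source tree ships a suppression file for exactly these default warnings (see Misc/README.valgrind in the source tarball); passing it to valgrind is the usual way to silence them. A sketch, with paths assumed to match the build directory above:

```shell
# Run from the Python-2.7.5 source directory; Misc/valgrind-python.supp
# ships with the CPython source and suppresses the known false positives
valgrind --leak-check=full \
         --suppressions=Misc/valgrind-python.supp \
         --log-file=msm_suppress.log \
         ./python sample.py
```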
Python Programming arunsolo1984 http://forums.devshed.com/python-programming-11/valgrind-error-python-980041.html
install pip on windows box / version win 7 :: and getting pip to work http://forums.devshed.com/python-programming/980001-install-pip-windows-box-version-win-7-getting-pip-new-post.html Thu, 08 Feb 2018 14:07:20 GMT

dear Python Community,


I have been running openSUSE for a decade - and I am very happy with it. I love openSUSE.


Some days ago I wanted to install Python on my computers:

a. on a Leap 42.3 machine - I am very glad that I do not have any issues there.
b. I need to install Python on a Windows box too, since at work I do not have any Linux. Unfortunately! ;)

So at the office I am on Windows 7.

I have installed Python 3.6.4,


but the first setup was __without__ the correct environment variables,

so I chose the "Modify" setup and, in the advanced options, selected:

Quote:

+ associate files with Python (requires the py launcher)
+ create shortcuts for installed apps
+ add Python to environment variables
precompile standard libraries
download debugging symbols
download debug binaries
well


Now I want to install the development version of overpy.

Install git and run the following commands on the command line:

Code:

$ git clone https://github.com/DinoTools/python-overpy.git
$ cd python-overpy
$ python setup.py install


But in the Windows CMD this does not work at all - nothing happens.
And until now I was not able to get pip to work -

why is this so!?


What can and what should I do?
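For what it's worth, a common way around PATH problems on Windows is to call pip through the py launcher (installed machine-wide by the python.org installer) instead of relying on a pip.exe being on PATH. A sketch for the Windows CMD:

```shell
:: Windows CMD - the "py" launcher works regardless of PATH settings,
:: so "py -m pip" reaches pip even when "pip" alone is not found
py -m pip --version
py -m pip install overpy
```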
Python Programming gibraltar http://forums.devshed.com/python-programming-11/install-pip-windows-box-version-win-7-getting-pip-980001.html
How to use Beautiful Soup to extract string in <script> tag? http://forums.devshed.com/python-programming/979980-beautiful-soup-extract-string-script-tag-new-post.html Tue, 06 Feb 2018 18:42:34 GMT Hi there,

for a little programme I want to fetch the data of various WordPress plugins - to be concrete, about 50 plugins
that each have their own page - see below.

the following data are needed: "Version", "Active installations" and "Tested up to:"

for a list of WordPress plugins - approx. 50 plugins are of interest:

https://wordpress.org/plugins/wp-job-manager
https://wordpress.org/plugins/ninja-forms
https://wordpress.org/plugins/participants-database and so on and so forth.

These plugins are listed in my favorites - so if I create a login with BS4, I can log in and parse all those favorite pages.
As a first approach, I can instead loop through a set of URLs to fetch all the necessary pages.



I need the data from the following three lines:


see for example:

https://wordpress.org/plugins/wp-job-manager

Version: <strong>1.29.3</strong>
Active installations: <strong>100,000+</strong>
Tested up to: <strong>4.9.4</strong>



We can solve this task with methods other than using only BeautifulSoup - for example with BS4 plus regular expressions.


Assuming we do this with a regular expression, we need to locate the relevant elements in the HTML. The idea is to define a regular expression that picks out the three labels, while BeautifulSoup walks the elements and extracts the text mentioned above:

Code:

import re

from bs4 import BeautifulSoup

data = """
<li>Version: <strong>1.29.3</strong></li>
<li>Last updated: <strong><span>6 days</span> ago</strong></li>
<li>Active installations: <strong>100,000+</strong></li>
<li>Requires WordPress Version: <strong>4.3.1</strong></li>
<li>Tested up to: <strong>4.9.4</strong></li>
"""

# the three labels we want to extract
pattern = re.compile(r"^(Version|Active installations|Tested up to):")

soup = BeautifulSoup(data, "html.parser")
for li in soup.find_all("li"):
    text = li.get_text(" ", strip=True)
    match = pattern.search(text)
    if match and li.strong:
        print(match.group(1), li.strong.get_text(strip=True))

Prints:

Version 1.29.3
Active installations 100,000+
Tested up to 4.9.4

Well, finally I want to store the text in a database or a spreadsheet - so it would be great if we could get this in CSV format, or in an array, so that we can store it in a DB.

Here we are using a simple regular expression for the text. We could go further and be stricter about it, but I doubt that would be practically necessary for this problem.
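For the storing part, the csv module from the standard library would do. A minimal sketch, assuming the extracted values have already been collected into a dict (the values shown are the ones from the example above; the "plugin" key is a hypothetical addition):

```python
import csv
import io

# hypothetical result of the extraction step for one plugin page
row = {
    "plugin": "wp-job-manager",
    "Version": "1.29.3",
    "Active installations": "100,000+",
    "Tested up to": "4.9.4",
}

# write to an in-memory buffer here; for a real file use
# open("plugins.csv", "w", newline="") instead of io.StringIO()
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)

print(buf.getvalue())
```

Note that the value containing a comma ("100,000+") gets quoted automatically, so it does not break the CSV columns.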


So I have to refine this a bit ...
Python Programming gibraltar http://forums.devshed.com/python-programming-11/beautiful-soup-extract-string-script-tag-979980.html
request at the endpoint of the overpass-api? - with python http://forums.devshed.com/python-programming/979930-request-endpoint-overpass-api-python-new-post.html Mon, 29 Jan 2018 19:16:34 GMT

hello dear community, good evening dear Python-experts




Since this is a real Python question, I think you might be able to help me.


With the Overpass API python wrapper we have a thin Python wrapper around the OpenStreetMap Overpass API: https://github.com/mvexel/overpass-api-python-wrapper

Here is a simple example:

Quote:

import overpass
api = overpass.API()
response = api.Get('node["name"="Salt Lake City"]')

Note that we don't have to include any of the output meta statements; the wrapper will, well, wrap those.
We get our result as a dictionary, which represents the JSON output we would get from the Overpass API directly.

Quote:

print [(feature['tags']['name'], feature['id']) for feature in response['elements']]
[(u'Salt Lake City', 150935219), (u'Salt Lake City', 585370637), (u'Salt Lake City', 1615721573)]

We can specify the format of the response using the responseformat parameter. By default we get GeoJSON; alternatives are plain JSON (json) and OSM XML (xml), as output directly by the Overpass API.

Quote:

response = api.Get('node["name"="Salt Lake City"]', responseformat="xml")

Question: can we also get CSV - can we perform a request like the one below with the Python wrapper against the endpoint of overpass turbo?

Quote:

[out:csv(::id,::type,"name","addr:postcode","addr:city",
"addr:street","addr:housenumber","website"," contact:email=*")][timeout:30];
area[name="Madrid"]->.a;
( node(area.a)[amenity=hospital];
way(area.a)[amenity=hospital];
rel(area.a)[amenity=hospital];);
out;

Well - how can the above-mentioned overpass-turbo request be used at the endpoint of the overpass-api?


The wrapper returns a dictionary, so if we want something like the CSV output we are looking for, we need to create it from the dictionary that the request returns.

btw: {{geocodeArea:*}} is not a native Overpass syntax, we should use an area ID.

The request we could feed into the Get method would be something along the lines of:

Quote:

area(3601744366)->.a;
(node(area.a)[amenity=hospital];
way(area.a)[amenity=hospital];);
(._;>;);
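Going from the wrapper's dictionary to CSV rows could then look roughly like this - a sketch in which a hand-made response dict stands in for a real API result (the element keys "type", "id", "tags" follow the usual Overpass JSON layout; the values are invented):

```python
import csv
import io

# stand-in for the dictionary the wrapper would return for such a query
response = {"elements": [
    {"type": "node", "id": 1001,
     "tags": {"name": "Hospital A", "addr:city": "Madrid"}},
    {"type": "way", "id": 2002,
     "tags": {"name": "Hospital B"}},
]}

fields = ["type", "id", "name", "addr:city"]

buf = io.StringIO()  # swap in a real file handle to write e.g. hospitals.csv
writer = csv.writer(buf)
writer.writerow(fields)
for elem in response.get("elements", []):
    tags = elem.get("tags", {})  # tag values live in a nested dict
    writer.writerow([elem.get("type"), elem.get("id"),
                     tags.get("name", ""), tags.get("addr:city", "")])

print(buf.getvalue())
```

Missing tags simply become empty cells, which keeps the column layout stable across elements.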
Python Programming gibraltar http://forums.devshed.com/python-programming-11/request-endpoint-overpass-api-python-979930.html
extracting from BS4 and storing as list elements in Python http://forums.devshed.com/python-programming/979929-extracting-bs4-storing-list-elements-python-new-post.html Mon, 29 Jan 2018 18:17:22 GMT hello dear community,


I am currently working on a little Python programme that extracts data with BS4 and stores it as list elements in Python.


As I am fairly new to Python, I need some help with that. Nonetheless, I'm trying to write a very simple spider for web crawling. Here's my first approach:
I need to fetch the data out of this page: Web Filter

First, I view the page source to find the HTML elements: view-source:https://europa.eu/youth/volunteering...rganisation_en
I have to extract data wrapped within multiple HTML tags from the above-mentioned webpage using BeautifulSoup4.
I have to store all of the extracted data in a list - but I want each extracted item as a separate list element, separated by commas.

here we have the HTML content structure:
Code:

<div class="view-content">
            <div class="row is-flex"></span>
                <div class="col-md-4"></span>
            <div class </span>
  <div class= >
    <h4 Data 1 </span>
          <div class= Data 2</span>
            <p class=
    <i class=
    <strong>Data 3 </span>
</p>    <p class= Data 4 </span>
          <p class= Data 5 </span>
                  <p><strong>Data 6</span>
        <div class=</span>
      <a href="Data 7</span>
  </div>
</div>


well an approach would be:

Code:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

my_url = 'http://europa.eu/youth/volunteering/evs-organisation_en'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")

# grab the table cells with an empty class attribute
cc = page_soup.find_all("td", {"class": ""})

# print the first ten hits (guarded, in case fewer are found)
for i in range(min(10, len(cc))):
    print(cc[i].text, i)

I guess I need some slight changes to the code in order to get the thing working.


Code to extract:

Code:

# elem would be one of the div blocks located above
for data in elem.find_all('span', class_=""):
    print(data.text)

This should give an output:

Code:

data = [ele.text for ele in soup.find_all('span', {'class':'NormalTextrun'})]
print(data)


Output: [' Data 1 ', ' Data 2 ', ' Data 3 ' and so forth]
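If the page really yields one flat list like that, with a fixed number of fields per organisation (seven in the structure above - an assumption), the list could be sliced into per-record sublists afterwards:

```python
# flat list as produced by the extraction step (placeholder values)
data = ['Data %d' % n for n in range(1, 15)]

FIELDS_PER_RECORD = 7  # assumption: each record spans seven extracted values

# slice the flat list into one sublist per record
records = [data[i:i + FIELDS_PER_RECORD]
           for i in range(0, len(data), FIELDS_PER_RECORD)]

for record in records:
    print(record)
```

With 14 placeholder values this yields two records of seven fields each.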

Question:
I need help with the extraction part ...


love to hear from you

yours, gibraltar
Python Programming gibraltar http://forums.devshed.com/python-programming-11/extracting-bs4-storing-list-elements-python-979929.html
Is it good to start with Python? http://forums.devshed.com/python-programming/979916-start-pyton-new-post.html Sun, 28 Jan 2018 14:05:03 GMT Hello

I want to know if it's a good idea to start learning programming with Python. In my class we are programming in Pascal right now, and I want to find something new to stick with for a while.
Python Programming Simonleo http://forums.devshed.com/python-programming-11/start-pyton-979916.html