
Web Scraping with Beautiful Soup — Siblings and Parent Nodes

We can get data from web pages with Beautiful Soup.

It lets us parse the DOM and extract the data we want.

In this article, we’ll look at how to scrape HTML documents with Beautiful Soup.

find_parents() and find_parent()

We can find parent elements of a given element with the find_parents method.

The find_parent method returns the first parent element only.

For example, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
a_string = soup.find(string="Lacie")
print(a_string.find_parents("a"))

And we get:

[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

logged.

We get the element with the string "Lacie" .

Then we get the parents of that with the find_parents method.

If we replace find_parents with find_parent , then we get:

<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>

printed.
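That single-element result comes from a call like the following, continuing the snippet above (a minimal sketch):

print(a_string.find_parent("a"))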

find_next_siblings() and find_next_sibling()

We can call find_next_siblings and find_next_sibling to get the siblings that come after a given element.

For instance, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
first_link = soup.a
print(first_link.find_next_siblings("a"))

And then we get the a siblings that come after the first a element.

And so we see:

[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

logged.

If we call find_next_sibling on first_link , then we get:

<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
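That result comes from a call like this, continuing the earlier snippet (a minimal sketch):

print(first_link.find_next_sibling("a"))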

find_previous_siblings() and find_previous_sibling()

We can find previous siblings with the find_previous_siblings and find_previous_sibling methods.

For instance, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
last_link = soup.find("a", id="link3")
print(last_link.find_previous_siblings("a"))

Then we call find_previous_siblings to get all the previous links.

So we get:

[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

displayed.

find_previous_sibling returns the first result only.
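Continuing the snippet above, a minimal sketch of that call:

print(last_link.find_previous_sibling("a"))

This should print just the Lacie link, the nearest preceding a sibling.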

find_all_next() and find_next()

We can call the find_all_next method to return all the nodes that come after a given node in the document, not just its siblings.

For example, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
first_link = soup.a
print(first_link.find_all_next(string=True))

Then we get:

[u'Elsie', u',\n', u'Lacie', u' and\n', u'Tillie', u';\nand they lived at the bottom of a well.', u'\n', u'...', u'\n']

returned.

find_next only returns the first node that comes after a given node.
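Continuing the snippet above, a minimal sketch of that call:

print(first_link.find_next(string=True))

This should print 'Elsie', the first string parsed after the opening a tag.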

Conclusion

We can get siblings and parent nodes with Beautiful Soup.


Web Scraping with Beautiful Soup — Searching Nodes

We can get data from web pages with Beautiful Soup.

It lets us parse the DOM and extract the data we want.

In this article, we’ll look at how to scrape HTML documents with Beautiful Soup.

Searching Strings with Regex

We can search strings with regex.

For example, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(string=re.compile("Dormouse")))

We call re.compile to create our regex.

Also, we can search for strings with a function:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

def is_the_only_string_within_a_tag(s):
    return (s == s.parent.string)

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(string=is_the_only_string_within_a_tag))

We get the string from the node with s.parent.string .

s is the string node we’re searching for.

The limit Argument

We can limit the number of items returned with find_all with the limit argument.

For example, we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all("a", limit=2))

And we see:

[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

logged.

The recursive Argument

We can set whether to search for elements recursively with the recursive argument.

For example, if we want to disable recursive search, we write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.html.find_all("title", recursive=False))

then we get an empty list since we turned off recursive search.

This is because title isn't a direct child of html (it sits inside head), and with recursive=False, find_all only looks at the direct children of the tag it's called on.
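As a contrast, continuing the snippet above, searching for a direct child of html with recursion turned off does find a result (a minimal sketch):

print(soup.html.find_all("head", recursive=False))

Since head is a direct child of html, it should be returned even without recursive search.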

find()

We can find the first element with the given selector with find :

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find('title'))

Then we get:

<title>The Dormouse's story</title>

printed.

We can chain find calls:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find("head").find("title"))

Conclusion

We can search for various elements with Beautiful Soup.


Web Scraping with Beautiful Soup — CSS Class and Strings

We can get data from web pages with Beautiful Soup.

It lets us parse the DOM and extract the data we want.

In this article, we’ll look at how to scrape HTML documents with Beautiful Soup.

Searching by CSS Class

We can get an element with the given CSS class with Beautiful Soup.

For example, we can write:

from bs4 import BeautifulSoup
import re

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all("a", class_="sister"))

We get all the a tags with class sister , so we see:

[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

printed.

Also, we can search with a regex:

from bs4 import BeautifulSoup
import re

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(class_=re.compile("itl")))

We get all the elements with class having the 'itl' substring, so we get:

[<p class="title"><b>The Dormouse's story</b></p>]

printed.

Also, we can set a function:

from bs4 import BeautifulSoup
import re

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(class_=has_six_characters))

We set the class_ parameter to the has_six_characters function, so we get all the elements whose class value is six characters long.

So we see:

[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

printed.

The class attribute can have more than one value, and we can search for them all.

For example, we can write:

from bs4 import BeautifulSoup

css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
print(css_soup.find_all("p", class_="body strikeout"))

to search for nodes whose class attribute is exactly body strikeout.

For this kind of exact match, the class values have to appear in the same order.
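Searching for any single one of those classes still matches, but a multi-valued class_ string is only compared against the exact attribute value. A minimal sketch with the same css_soup:

print(css_soup.find_all("p", class_="strikeout"))
print(css_soup.find_all("p", class_="strikeout body"))

The first call should return the p element; the second should return an empty list because the order differs from the attribute value.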

The string Argument

We can also search for string content instead of tags.

So we can write:

from bs4 import BeautifulSoup
import re
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(string=["Tillie", "Elsie", "Lacie"]))

Then we see:

[u'Elsie', u'Lacie', u'Tillie']

logged.

Conclusion

We can search for elements with the CSS class and strings with Beautiful Soup.


Web Scraping with Beautiful Soup — Siblings and Selectors

We can get data from web pages with Beautiful Soup.

It lets us parse the DOM and extract the data we want.

In this article, we’ll look at how to scrape HTML documents with Beautiful Soup.

.next_element and .previous_element

We can get the node that was parsed immediately after or before a given node with the .next_element and .previous_element properties. These aren't necessarily sibling elements.

For example, we can write:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_element)

We get the a element with the ID link3 .

Then we get the node that was parsed immediately after it with the next_element property. In this case that's the string "Tillie" inside the link itself, since it was parsed right after the opening a tag.

So we see:

Tillie

printed.

We can also get the previous element with the previous_element property:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.previous_element)

And we see:

and

printed.

find_all()

We can find all elements with the given selector with the find_all method.

For example, we can write:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all("title"))

to get all the title elements, so we see:

[<title>The Dormouse's story</title>]

printed.

We can also filter by CSS class by passing the class name as the second argument. For example, we can write:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all("p", "title"))

Then we get:

[<p class="title"><b>The Dormouse's story</b></p>]

logged.
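To match more than one kind of element in a single call, we can pass a list of tag names instead. Continuing the snippet above, a minimal sketch:

print(soup.find_all(["a", "b"]))

This should return the b tag and all three a tags.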

The Keyword Arguments

We can filter on other attributes by passing them in as keyword arguments.

For example, we can write:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(id='link2'))

and get the a element with ID link2 .

We can also pass in a regex object to select nodes:

from bs4 import BeautifulSoup
import re

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find_all(href=re.compile("elsie")))

We get all the elements whose href attribute contains the substring 'elsie'.

So we get:

[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

printed.
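We can also combine several keyword filters in one call. Continuing the snippet above, a minimal sketch:

print(soup.find_all(href=re.compile("elsie"), id='link1'))

Only nodes matching both filters are returned, so this should again print just the Elsie link.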

We can also search for nodes with the given attributes.

To do that, we write:

from bs4 import BeautifulSoup
import re

soup = BeautifulSoup('<div data-foo="value">foo!</div>', 'html.parser')
print(soup.find_all(attrs={"data-foo": "value"}))

We get the nodes with the data-foo attribute set to value .

So we see:

[<div data-foo="value">foo!</div>]

printed.

To search for a node by its HTML name attribute, we can write:

from bs4 import BeautifulSoup

name_soup = BeautifulSoup('<input name="email"/>', 'html.parser')
print(name_soup.find_all(attrs={"name": "email"}))

Then we get:

[<input name="email"/>]

logged.
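We go through attrs here because find_all uses its name parameter for the tag name, so a name keyword argument doesn't filter on the HTML name attribute. A minimal sketch with the same name_soup:

print(name_soup.find_all(name="email"))

This is interpreted as a search for tags named email, so it should print an empty list.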

Conclusion

We can get nodes at various locations and with various attributes with Beautiful Soup.


Web Scraping with Beautiful Soup — Parent and Sibling Elements

We can get data from web pages with Beautiful Soup.

It lets us parse the DOM and extract the data we want.

In this article, we’ll look at how to scrape HTML documents with Beautiful Soup.

Going Up

We can move up the tree with Beautiful Soup.

For example, we can write:

from bs4 import BeautifulSoup
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
title_tag = soup.title
print(title_tag)
print(title_tag.parent)

Then the first print call prints:

<title>The Dormouse's story</title>

And the 2nd print call prints:

<head><title>The Dormouse's story</title></head>

So we see the head tag with the parent property.
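The parent property also works on strings and on the top-level tag. Continuing the snippet above, a minimal sketch:

print(title_tag.string.parent)
print(type(soup.html.parent))

The parent of the title string is the title tag itself, and the parent of the html tag is the BeautifulSoup object.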

.parents

We can iterate over all of an element's parents with the .parents property.

For example, we can write:

from bs4 import BeautifulSoup
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
link = soup.a
print(link)
for parent in link.parents:
    print(parent.name)

We get the first a element with soup.a .

So the first print call is:

<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

The for loop then prints the name of each parent of the a element, so we get:

p
body
html
[document]

Going Sideways

We can get sibling elements with Beautiful Soup.

For example, we can write:

from bs4 import BeautifulSoup

sibling_soup = BeautifulSoup(
    "<a><b>text1</b><c>text2</c></b></a>", 'html.parser')
print(sibling_soup.prettify())

Then we get:

<a>
 <b>
  text1
 </b>
 <c>
  text2
 </c>
</a>

printed.

.next_sibling and .previous_sibling

We can get the next sibling with the .next_sibling property and the previous sibling with the .previous_sibling property.

For example, we can write:

from bs4 import BeautifulSoup

sibling_soup = BeautifulSoup(
    "<a><b>text1</b><c>text2</c></b></a>", 'html.parser')
print(sibling_soup.b.next_sibling)
print(sibling_soup.c.previous_sibling)

We see:

<c>text2</c>
<b>text1</b>

printed from the print calls.

The strings in the tags aren’t siblings since they don’t have the same parent.
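We can check that directly with the same sibling_soup (a minimal sketch):

print(repr(sibling_soup.b.string.next_sibling))

This should print None, since 'text1' lives inside the b tag and 'text2' lives inside the c tag.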

.next_siblings and .previous_siblings

We can get multiple siblings with the .next_siblings and .previous_siblings properties.

For example, if we have:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
for sibling in soup.a.next_siblings:
    print(repr(sibling))

Then we see all the siblings next to the first a element:

u',\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
u' and\n'
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
u';\nand they lived at the bottom of a well.'

printed.

We can do the same with the previous_siblings property:

from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))

And we see:

u' and\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
u',\n'
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
u'Once upon a time there were three little sisters; and their names were\n'

printed.

Conclusion

We can get parent and sibling nodes with Beautiful Soup.