QuerySet API reference

This document describes the details of the QuerySet API. It builds on the material presented in the model and database query guides, so you'll probably want to read and understand those documents before reading this one.

Throughout this reference, we'll use the example Blog models presented in the database query guide.

When QuerySets are evaluated

Internally, a QuerySet can be constructed, filtered, sliced, and generally passed around without actually hitting the database. No database activity actually occurs until you do something to evaluate the queryset.

You can evaluate a QuerySet in the following ways:

  • Iteration. A QuerySet is iterable, and it executes its database query the first time you iterate over it. For example, this will print the headline of all entries in the database:

    for e in Entry.objects.all():
        print(e.headline)
    

    Note: don't use this if all you want to do is determine if at least one result exists. It's more efficient to use exists().

  • Asynchronous iteration. A QuerySet can also be iterated over using async for:

    async for e in Entry.objects.all():
        results.append(e)
    

    The synchronous and asynchronous iterators of a queryset share the same cache.

    Changed in Django 4.1:

    Support for asynchronous iteration was added.

  • Slicing. As explained in Limiting QuerySets, a QuerySet can be sliced using Python's list-slicing syntax. Slicing an unevaluated QuerySet usually returns another unevaluated QuerySet, but Django will execute the database query and return a list if you use the "step" parameter of slice syntax. Slicing a QuerySet that has been evaluated also returns a list.

    Also note that even though slicing an unevaluated QuerySet returns another unevaluated QuerySet, modifying it further (e.g., adding more filters, or modifying ordering) is not allowed, since that does not translate well into SQL and it would not have a clear meaning either.

  • Pickling/Caching. See the section on pickling QuerySets below for details of what is involved. The important thing for the purposes of this section is that the results are read from the database.

  • repr(). A QuerySet is evaluated when you call repr() on it. This is for convenience in the Python interactive interpreter, so you can immediately see your results when using the API interactively.

  • len(). A QuerySet is evaluated when you call len() on it. This, as you might expect, returns the length of the result list.

    Note: if you only need to determine the number of records in the set (and don't need the actual objects), it's much more efficient to handle a count at the database level using SQL's SELECT COUNT(*). Django provides a count() method for precisely this reason.

  • list(). Force evaluation of a QuerySet by calling list() on it. For example:

    entry_list = list(Entry.objects.all())
    
  • bool(). Testing a QuerySet in a boolean context, such as using bool(), or, and or an if statement, will cause the query to be executed. If there is at least one result, the QuerySet is True, otherwise False. For example:

    if Entry.objects.filter(headline="Test"):
        print("There is at least one Entry with the headline Test")
    

    Note: if you only want to determine if at least one result exists (and don't need the actual objects), it's more efficient to use exists().
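The lazy-evaluation-plus-caching behavior described above can be sketched in plain Python. This is an illustrative toy, not Django's implementation; the LazyQuery class and its hits counter are invented for the demonstration:

```python
class LazyQuery:
    def __init__(self, data, predicates=()):
        self._data = data              # stands in for the database table
        self._predicates = predicates  # accumulated filter conditions
        self._cache = None             # result cache, filled on first use
        self.hits = 0                  # counts simulated database hits

    def filter(self, predicate):
        # Building a query touches no data; it returns a new lazy query.
        return LazyQuery(self._data, self._predicates + (predicate,))

    def __iter__(self):
        if self._cache is None:        # evaluate once, then reuse the cache
            self.hits += 1
            self._cache = [row for row in self._data
                           if all(p(row) for p in self._predicates)]
        return iter(self._cache)

q = LazyQuery([1, 2, 3, 4]).filter(lambda n: n % 2 == 0)
assert q.hits == 0                         # no "database" access yet
assert list(q) == [2, 4]                   # first iteration runs the query
assert list(q) == [2, 4] and q.hits == 1   # second iteration hits the cache
```

As in Django, constructing and filtering the query costs nothing; only the first iteration does the work, and later iterations reuse the cached results.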

Pickling QuerySets

If you pickle a QuerySet, this will force all the results to be loaded into memory prior to pickling. Pickling is usually used as a precursor to caching, and when the cached queryset is reloaded, you want the results to already be present and ready for use (reading from the database can take some time, defeating the purpose of caching). This means that when you unpickle a QuerySet, it contains the results at the moment it was pickled, rather than the results that are currently in the database.

If you only want to pickle the necessary information to recreate the QuerySet from the database at a later time, pickle the query attribute of the QuerySet. You can then recreate the original QuerySet (without any results loaded) using some code like this:

>>> import pickle
>>> query = pickle.loads(s)  # Assuming 's' is the pickled string.
>>> qs = MyModel.objects.all()
>>> qs.query = query  # Restore the original 'query'.

The query attribute is an opaque object. It represents the internals of the query construction and is not part of the public API. However, it is safe (and fully supported) to pickle and unpickle the attribute's contents as described here.

Restrictions on QuerySet.values_list()

If you recreate QuerySet.values_list() using the pickled query attribute, it will be converted to QuerySet.values():

>>> import pickle
>>> qs = Blog.objects.values_list("id", "name")
>>> qs
<QuerySet [(1, 'Beatles Blog')]>
>>> reloaded_qs = Blog.objects.all()
>>> reloaded_qs.query = pickle.loads(pickle.dumps(qs.query))
>>> reloaded_qs
<QuerySet [{'id': 1, 'name': 'Beatles Blog'}]>

You can't share pickles between versions

Pickles of QuerySets are only valid for the version of Django that was used to generate them. If you generate a pickle using Django version N, there is no guarantee that pickle will be readable with Django version N+1. Pickles should not be used as part of a long-term archival strategy.

Since pickle compatibility errors can be difficult to diagnose, such as silently corrupted objects, a RuntimeWarning is raised when you try to unpickle a queryset in a Django version that is different from the one in which it was pickled.

QuerySet API

Here's the formal declaration of a QuerySet:

class QuerySet(model=None, query=None, using=None, hints=None)

Usually when you interact with a QuerySet you'll use it by chaining filters. To make this work, most QuerySet methods return new querysets. These methods are covered in detail later in this section.

The QuerySet class has the following public attributes you can use for introspection:

ordered

True if the QuerySet is ordered, i.e. has an order_by() clause or a default ordering on the model; False otherwise.

db

The database that will be used if this query is executed now.

Note

The query parameter to QuerySet exists so that specialized query subclasses can reconstruct internal query state. The value of the parameter is an opaque representation of that query state and is not part of a public API.

Methods that return new QuerySets

Django provides a range of QuerySet refinement methods that modify either the types of results returned by the QuerySet or the way its SQL query is executed.

Note

These methods do not run database queries, therefore they are safe to run in asynchronous code, and do not have separate asynchronous versions.

filter()

filter(*args, **kwargs)

Returns a new QuerySet containing objects that match the given lookup parameters.

The lookup parameters (**kwargs) should be in the format described in Field lookups below. Multiple parameters are joined via AND in the underlying SQL statement.

If you need to execute more complex queries (for example, queries with OR statements), you can use Q objects (*args).

exclude()

exclude(*args, **kwargs)

Returns a new QuerySet containing objects that do not match the given lookup parameters.

The lookup parameters (**kwargs) should be in the format described in Field lookups below. Multiple parameters are joined via AND in the underlying SQL statement, and the whole thing is enclosed in a NOT().

This example excludes all entries whose pub_date is later than 2005-1-3 AND whose headline is "Hello":

Entry.objects.exclude(pub_date__gt=datetime.date(2005, 1, 3), headline="Hello")

In SQL terms, that evaluates to:

SELECT ...
WHERE NOT (pub_date > '2005-1-3' AND headline = 'Hello')

This example excludes all entries whose pub_date is later than 2005-1-3 OR whose headline is "Hello":

Entry.objects.exclude(pub_date__gt=datetime.date(2005, 1, 3)).exclude(headline="Hello")

In SQL terms, that evaluates to:

SELECT ...
WHERE NOT pub_date > '2005-1-3'
AND NOT headline = 'Hello'

Note that the second example is more restrictive.

If you need to execute more complex queries (for example, queries with OR statements), you can use Q objects (*args).
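The difference between passing both conditions to a single exclude() call and chaining two exclude() calls is De Morgan's law, which can be checked with plain Python over made-up rows. This sketch stands in for the SQL above; it is not Django code:

```python
import datetime

# Each "entry" is a (pub_date, headline) pair; the rows are invented.
entries = [
    (datetime.date(2005, 1, 5), "Hello"),  # matches both conditions
    (datetime.date(2005, 1, 5), "Bye"),    # late pub_date only
    (datetime.date(2005, 1, 1), "Hello"),  # "Hello" headline only
    (datetime.date(2005, 1, 1), "Bye"),    # matches neither
]
cutoff = datetime.date(2005, 1, 3)

# exclude(pub_date__gt=..., headline="Hello"):
# keep rows where NOT (pub_date > cutoff AND headline = "Hello")
single = [e for e in entries if not (e[0] > cutoff and e[1] == "Hello")]

# exclude(pub_date__gt=...).exclude(headline="Hello"):
# keep rows where NOT pub_date > cutoff AND NOT headline = "Hello"
chained = [e for e in entries if not e[0] > cutoff and not e[1] == "Hello"]

assert len(single) == 3   # only the row matching both conditions is excluded
assert len(chained) == 1  # rows matching either condition are excluded
```

The single call excludes only the one row that matches both conditions, while the chained calls exclude every row that matches either one, which is why the second form is more restrictive.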

annotate()

annotate(*args, **kwargs)

Annotates each object in the QuerySet with the provided list of query expressions. An expression may be a simple value, a reference to a field on the model (or any related models), or an aggregate expression (averages, sums, etc.) computed over the objects that are related to the objects in the QuerySet.

Each argument to annotate() is an annotation that will be added to each object in the QuerySet that is returned.

The aggregation functions that are provided by Django are described in Aggregation Functions below.

Annotations specified using keyword arguments will use the keyword as the alias for the annotation. Anonymous arguments will have an alias generated for them based upon the name of the aggregate function and the model field that is being aggregated. Only aggregate expressions that reference a single field can be anonymous arguments. Everything else must be a keyword argument.

For example, if you were manipulating a list of blogs, you may want to determine how many entries have been made in each blog:

>>> from django.db.models import Count
>>> q = Blog.objects.annotate(Count("entry"))
# The name of the first blog
>>> q[0].name
'Blogasaurus'
# The number of entries on the first blog
>>> q[0].entry__count
42

The Blog model itself doesn't define an entry__count attribute, but by using a keyword argument to specify the aggregate function, you can control the name of the annotation:

>>> q = Blog.objects.annotate(number_of_entries=Count("entry"))
# The number of entries on the first blog, using the name provided
>>> q[0].number_of_entries
42

For an in-depth discussion of aggregation, see the topic guide on aggregation.
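What annotate(Count("entry")) computes per blog can be sketched with plain Python instead of SQL GROUP BY (the blog names and counts below are made up for the illustration):

```python
from collections import Counter

# Each element names the blog an entry belongs to (stand-in for a FK).
entries = ["beatles", "beatles", "lawrence", "beatles"]
entry_count = Counter(entries)

# Each "blog" gets an entry__count annotation, like q[0].entry__count.
blogs = [{"name": name, "entry__count": entry_count[name]}
         for name in sorted(entry_count)]
assert blogs[0] == {"name": "beatles", "entry__count": 3}
```

The database does the equivalent grouping and counting for you in one query, attaching the result to each returned object.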

alias()

alias(*args, **kwargs)

Same as annotate(), but instead of annotating objects in the QuerySet, saves the expression for later reuse with other QuerySet methods. This is useful when the result of the expression itself is not needed but it is used for filtering, ordering, or as a part of a complex expression. Not selecting the unused value removes redundant work from the database, which should result in better performance.

For example, if you want to find blogs with more than 5 entries, but are not interested in the exact number of entries, you could do this:

>>> from django.db.models import Count
>>> blogs = Blog.objects.alias(entries=Count("entry")).filter(entries__gt=5)

alias() can be used in conjunction with annotate(), exclude(), filter(), order_by(), and update(). To use aliased expressions with other methods (e.g. aggregate()), you must promote them to annotations:

Blog.objects.alias(entries=Count("entry")).annotate(
    entries=F("entries"),
).aggregate(Sum("entries"))

filter() and order_by() can take expressions directly, but expression construction and usage often do not happen in the same place (for example, a QuerySet method creates expressions, for later use in views). alias() allows building complex expressions incrementally, possibly spanning multiple methods and modules, referring to the expression parts by their aliases, and using annotate() only for the final result.

order_by()

order_by(*fields)

By default, results returned by a QuerySet are ordered by the ordering tuple given by the ordering option in the model's Meta. You can override this on a per-QuerySet basis by using the order_by method.

Example:

Entry.objects.filter(pub_date__year=2005).order_by("-pub_date", "headline")

The result above will be ordered by pub_date descending, then by headline ascending. The negative sign in front of "-pub_date" indicates descending order. Ascending order is implied. To order randomly, use "?", like so:

Entry.objects.order_by("?")

Note: order_by('?') queries may be expensive and slow, depending on the database backend you're using.

To order by a field in a different model, use the same syntax as when you are querying across model relations. That is, the name of the field, followed by a double underscore (__), followed by the name of the field in the new model, and so on for as many models as you want to join. For example:

Entry.objects.order_by("blog__name", "headline")

If you try to order by a field that is a relation to another model, Django will use the default ordering on the related model, or order by the related model's primary key if there is no Meta.ordering specified. For example, since the Blog model has no default ordering specified:

Entry.objects.order_by("blog")

...is identical to:

Entry.objects.order_by("blog__id")

If Blog had ordering = ['name'], then the first queryset would be identical to:

Entry.objects.order_by("blog__name")

You can also order by query expressions by calling asc() or desc() on the expression:

Entry.objects.order_by(Coalesce("summary", "headline").desc())

asc() and desc() have arguments (nulls_first and nulls_last) that control how null values are sorted.

Be cautious when ordering by fields in related models if you are also using distinct(). See the note in distinct() for an explanation of how related model ordering can change the expected results.

Note

It is permissible to specify a multi-valued field to order the results by (for example, a ManyToManyField field, or the reverse relation of a ForeignKey field).

Consider this case:

class Event(Model):
    parent = models.ForeignKey(
        "self",
        on_delete=models.CASCADE,
        related_name="children",
    )
    date = models.DateField()


Event.objects.order_by("children__date")

Here, there could potentially be multiple ordering data for each Event; each Event with multiple children will be returned multiple times into the new QuerySet that order_by() creates. In other words, using order_by() on the QuerySet could return more items than you were working on to begin with, which is probably neither expected nor useful.

Thus, take care when using a multi-valued field to order the results. If you can be sure that there will only be one ordering piece of data for each of the items you're ordering, this approach should not present problems. If not, make sure the results are what you expect.

There's no way to specify whether ordering should be case sensitive. With respect to case-sensitivity, Django will order results however your database backend normally orders them.

You can order by a field converted to lowercase with Lower, which will achieve case-consistent ordering:

Entry.objects.order_by(Lower("headline").desc())

If you don't want any ordering to be applied to a query, not even the default ordering, call order_by() with no parameters.

You can tell if a query is ordered or not by checking the QuerySet.ordered attribute, which will be True if the QuerySet has been ordered in any way.

Each order_by() call will clear any previous ordering. For example, this query will be ordered by pub_date and not headline:

Entry.objects.order_by("headline").order_by("pub_date")

Warning

Ordering is not a free operation. Each field you add to the ordering incurs a cost to your database. Each foreign key you add will implicitly include all of its default orderings as well.

If a query doesn't have an ordering specified, results are returned from the database in an unspecified order. A particular ordering is guaranteed only when ordering by a set of fields that uniquely identify each object in the results. For example, if a name field isn't unique, ordering by it won't guarantee objects with the same name always appear in the same order.

reverse()

reverse()

Use the reverse() method to reverse the order in which a queryset's elements are returned. Calling reverse() a second time restores the ordering back to the normal direction.

To retrieve the "last" five items in a queryset, you could do this:

my_queryset.reverse()[:5]

Note that this is not quite the same as slicing from the end of a sequence in Python. The above example will return the last item first, then the penultimate item and so on. If we had a Python sequence and looked at seq[-5:], we would see the fifth-last item first. Django doesn't support that mode of access (slicing from the end), because it's not possible to do it efficiently in SQL.

Also, note that reverse() should generally only be called on a QuerySet which has a defined ordering (e.g., when querying against a model which defines a default ordering, or when using order_by()). If no such ordering is defined for a given QuerySet, calling reverse() on it has no real effect (the ordering was undefined prior to calling reverse(), and will remain undefined afterward).
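The difference between reverse()[:5] and Python's seq[-5:] described above can be checked with a plain list standing in for an ordered queryset:

```python
seq = list(range(1, 11))  # 1 .. 10, standing in for an ordered queryset

# Like my_queryset.reverse()[:5]: reverse the ordering, then take five.
django_style = list(reversed(seq))[:5]

# Python's tail slice: the same five items, but in the original order.
python_style = seq[-5:]

assert django_style == [10, 9, 8, 7, 6]  # last item first
assert python_style == [6, 7, 8, 9, 10]  # fifth-from-last item first
```

Both expressions select the same five elements; only the order in which they come back differs.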

distinct()

distinct(*fields)

Returns a new QuerySet that uses SELECT DISTINCT in its SQL query. This eliminates duplicate rows from the query results.

By default, a QuerySet will not eliminate duplicate rows. In practice, this is rarely a problem, because simple queries such as Blog.objects.all() don't introduce the possibility of duplicate result rows. However, if your query spans multiple tables, it's possible to get duplicate results when a QuerySet is evaluated. That's when you'd use distinct().

Note

Any fields used in an order_by() call are included in the SQL SELECT columns. This can sometimes lead to unexpected results when used in conjunction with distinct(). If you order by fields from a related model, those fields will be added to the selected columns and some otherwise duplicate rows may appear to be distinct. Since the extra columns don't appear in the returned results (they are only there to support ordering), it sometimes looks like non-distinct results are being returned.

Similarly, if you use a values() query to restrict the columns selected, the columns used in any order_by() (or default model ordering) will still be involved and may affect uniqueness of the results.

The moral here is that if you are using distinct(), be careful about ordering by related models. Similarly, when using distinct() and values() together, be careful when ordering by fields not in the values() call.

On PostgreSQL only, you can pass positional arguments (*fields) in order to specify the names of fields to which the DISTINCT should apply. This translates to a SELECT DISTINCT ON SQL query. Here's the difference: for a normal distinct() call, the database compares each field in each row when determining which rows are distinct. For a distinct() call with specified field names, the database will only compare the specified field names.

Note

When you specify field names, you must provide an order_by() in the QuerySet, and the fields in order_by() must start with the fields in distinct(), in the same order.

For example, SELECT DISTINCT ON (a) gives you the first row for each value in column a. If you don't specify an ordering, you'll get some arbitrary row.

Examples (those after the first will only work on PostgreSQL):

>>> Author.objects.distinct()
[...]

>>> Entry.objects.order_by("pub_date").distinct("pub_date")
[...]

>>> Entry.objects.order_by("blog").distinct("blog")
[...]

>>> Entry.objects.order_by("author", "pub_date").distinct("author", "pub_date")
[...]

>>> Entry.objects.order_by("blog__name", "mod_date").distinct("blog__name", "mod_date")
[...]

>>> Entry.objects.order_by("author", "pub_date").distinct("author")
[...]

Note

Keep in mind that order_by() uses any default related model ordering that has been defined. You might have to explicitly order by the relation _id or referenced field to make sure the DISTINCT ON expressions match those at the beginning of the ORDER BY clause. For example, if the Blog model defined an ordering by name:

Entry.objects.order_by("blog").distinct("blog")

...wouldn't work because the query would be ordered by blog__name, thus mismatching the DISTINCT ON expression. You'd have to explicitly order by the relation _id field (blog_id in this case) or the referenced one (blog__pk) to make sure both expressions match.

values()

values(*fields, **expressions)

Returns a QuerySet that returns dictionaries, rather than model instances, when used as an iterable.

Each of those dictionaries represents an object, with the keys corresponding to the attribute names of model objects.

This example compares the dictionaries of values() with the normal model objects:

# This list contains a Blog object.
>>> Blog.objects.filter(name__startswith="Beatles")
<QuerySet [<Blog: Beatles Blog>]>

# This list contains a dictionary.
>>> Blog.objects.filter(name__startswith="Beatles").values()
<QuerySet [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}]>

The values() method takes optional positional arguments, *fields, which specify field names to which the SELECT should be limited. If you specify the fields, each dictionary will contain only the field keys/values for the fields you specify. If you don't specify the fields, each dictionary will contain a key and value for every field in the database table.

Example:

>>> Blog.objects.values()
<QuerySet [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}]>
>>> Blog.objects.values("id", "name")
<QuerySet [{'id': 1, 'name': 'Beatles Blog'}]>

The values() method also takes optional keyword arguments, **expressions, which are passed through to annotate():

>>> from django.db.models.functions import Lower
>>> Blog.objects.values(lower_name=Lower("name"))
<QuerySet [{'lower_name': 'beatles blog'}]>

You can also use built-in and custom lookups. For example:

>>> from django.db.models import CharField
>>> from django.db.models.functions import Lower
>>> CharField.register_lookup(Lower)
>>> Blog.objects.values("name__lower")
<QuerySet [{'name__lower': 'beatles blog'}]>

An aggregate within a values() clause is applied before other arguments within the same values() clause. If you need to group by another value, add it to an earlier values() clause instead. For example:

>>> from django.db.models import Count
>>> Blog.objects.values("entry__authors", entries=Count("entry"))
<QuerySet [{'entry__authors': 1, 'entries': 20}, {'entry__authors': 1, 'entries': 13}]>
>>> Blog.objects.values("entry__authors").annotate(entries=Count("entry"))
<QuerySet [{'entry__authors': 1, 'entries': 33}]>

A few subtleties that are worth mentioning:

  • If you have a field called foo that is a ForeignKey, the default values() call will return a dictionary key called foo_id, since this is the name of the hidden model attribute that stores the actual value (the foo attribute refers to the related model). When you are calling values() and passing in field names, you can pass in either foo or foo_id and you will get back the same thing (the dictionary key will match the field name you passed in).

    For example:

    >>> Entry.objects.values()
    <QuerySet [{'blog_id': 1, 'headline': 'First Entry', ...}, ...]>
    
    >>> Entry.objects.values("blog")
    <QuerySet [{'blog': 1}, ...]>
    
    >>> Entry.objects.values("blog_id")
    <QuerySet [{'blog_id': 1}, ...]>
    
  • When using values() together with distinct(), be aware that ordering can affect the results. See the note in distinct() for details.

  • If you use a values() clause after an extra() call, any fields defined by a select argument in the extra() must be explicitly included in the values() call. Any extra() call made after a values() call will have its extra selected fields ignored.

  • Calling only() and defer() after values() doesn't make sense, so doing so will raise a TypeError.

  • Combining transforms and aggregates requires the use of two annotate() calls, either explicitly or as keyword arguments to values(). As above, if the transform has been registered on the relevant field type the first annotate() can be omitted, thus the following examples are equivalent:

    >>> from django.db.models import CharField, Count
    >>> from django.db.models.functions import Lower
    >>> CharField.register_lookup(Lower)
    >>> Blog.objects.values("entry__authors__name__lower").annotate(entries=Count("entry"))
    <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>
    >>> Blog.objects.values(entry__authors__name__lower=Lower("entry__authors__name")).annotate(
    ...     entries=Count("entry")
    ... )
    <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>
    >>> Blog.objects.annotate(entry__authors__name__lower=Lower("entry__authors__name")).values(
    ...     "entry__authors__name__lower"
    ... ).annotate(entries=Count("entry"))
    <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>
    

It is useful when you know you're only going to need values from a small number of the available fields and you won't need the functionality of a model instance object. It's more efficient to select only the fields you need to use.

Finally, note that you can call filter(), order_by(), etc. after the values() call; that means that these two calls are identical:

Blog.objects.values().order_by("id")
Blog.objects.order_by("id").values()

The people who made Django prefer to put all the SQL-affecting methods first, followed (optionally) by any output-affecting methods (such as values()), but it doesn't really matter. This is your chance to really flaunt your individualism.

You can also refer to fields on related models with reverse relations through OneToOneField, ForeignKey and ManyToManyField attributes:

>>> Blog.objects.values("name", "entry__headline")
<QuerySet [{'name': 'My blog', 'entry__headline': 'An entry'},
     {'name': 'My blog', 'entry__headline': 'Another entry'}, ...]>

Warning

Because ManyToManyField attributes and reverse relations can have multiple related rows, including these can have a multiplier effect on the size of your result set. This will be especially pronounced if you include multiple such fields in your values() query, in which case all possible combinations will be returned.

Special values for JSONField on SQLite

Due to the way the JSON_EXTRACT and JSON_TYPE SQL functions are implemented on SQLite, and the lack of a BOOLEAN data type, values() will return True, False, and None instead of "true", "false", and "null" strings for JSONField key transforms.

values_list()

values_list(*fields, flat=False, named=False)

This is similar to values() except that instead of returning dictionaries, it returns tuples when iterated over. Each tuple contains the value from the respective field or expression passed into the values_list() call, in the order they were passed in. For example:

>>> Entry.objects.values_list("id", "headline")
<QuerySet [(1, 'First entry'), ...]>
>>> from django.db.models.functions import Lower
>>> Entry.objects.values_list("id", Lower("headline"))
<QuerySet [(1, 'first entry'), ...]>

If you only pass in a single field, you can also pass in the flat parameter. If True, this will mean the returned results are single values, rather than 1-tuples. An example should make the difference clearer:

>>> Entry.objects.values_list("id").order_by("id")
<QuerySet[(1,), (2,), (3,), ...]>

>>> Entry.objects.values_list("id", flat=True).order_by("id")
<QuerySet [1, 2, 3, ...]>

It is an error to pass in flat when there is more than one field.

You can pass named=True to get results as a namedtuple():

>>> Entry.objects.values_list("id", "headline", named=True)
<QuerySet [Row(id=1, headline='First entry'), ...]>

Using a named tuple may make use of the results more readable, at the expense of a small performance penalty for transforming the results into a named tuple.

If you don't pass any values to values_list(), it will return all the fields in the model, in the order they were declared.

A common need is to get a specific field value of a certain model instance. To achieve that, use values_list() followed by a get() call:

>>> Entry.objects.values_list("headline", flat=True).get(pk=1)
'First entry'
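The three result shapes of values_list() (plain tuples, flat=True, and named=True) can be sketched with plain data. The rows below are made up, and Django builds the Row named tuple for you; here we build one by hand:

```python
from collections import namedtuple

rows = [(1, "First entry"), (2, "Second entry")]  # like ("id", "headline")

# values_list("id", "headline") -> one tuple per row.
assert rows[0] == (1, "First entry")

# values_list("id", flat=True) -> single values rather than 1-tuples.
flat = [r[0] for r in rows]
assert flat == [1, 2]

# values_list("id", "headline", named=True) -> named tuples called Row.
Row = namedtuple("Row", ["id", "headline"])
named = [Row(*r) for r in rows]
assert named[0].headline == "First entry"  # fields accessible by name
```

flat trades a 1-tuple per row for the bare value, while named trades a little conversion overhead for attribute access by field name.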

values() and values_list() are both intended as optimizations for a specific use case: retrieving a subset of data without the overhead of creating model instances. This metaphor falls apart when dealing with many-to-many and other multivalued relations (such as the one-to-many relation of a reverse foreign key) because the "one row, one object" assumption doesn't hold.

For example, notice the behavior when querying across a ManyToManyField:

>>> Author.objects.values_list("name", "entry__headline")
<QuerySet [('Noam Chomsky', 'Impressions of Gaza'),
 ('George Orwell', 'Why Socialists Do Not Believe in Fun'),
 ('George Orwell', 'In Defence of English Cooking'),
 ('Don Quixote', None)]>

Authors with multiple entries appear multiple times and authors without any entries have None for the entry headline.

Similarly, when querying a reverse foreign key, None appears for entries not having any author:

>>> Entry.objects.values_list("authors")
<QuerySet [('Noam Chomsky',), ('George Orwell',), (None,)]>

Special values for JSONField on SQLite

Due to the way the JSON_EXTRACT and JSON_TYPE SQL functions are implemented on SQLite, and the lack of a BOOLEAN data type, values_list() will return True, False, and None instead of "true", "false", and "null" strings for JSONField key transforms.

dates()

dates(field, kind, order='ASC')

Returns a QuerySet that evaluates to a list of datetime.date objects representing all available dates of a particular kind within the contents of the QuerySet.

field should be the name of a DateField of your model. kind should be either "year", "month", "week", or "day". Each datetime.date object in the result list is "truncated" to the given type.

  • "year" returns a list of all distinct year values for the field.
  • "month" returns a list of all distinct year/month values for the field.
  • "week" returns a list of all distinct year/week values for the field. All dates will be a Monday.
  • "day" returns a list of all distinct year/month/day values for the field.

order, which defaults to 'ASC', should be either 'ASC' or 'DESC'. This specifies how to order the results.

Examples:

>>> Entry.objects.dates("pub_date", "year")
[datetime.date(2005, 1, 1)]
>>> Entry.objects.dates("pub_date", "month")
[datetime.date(2005, 2, 1), datetime.date(2005, 3, 1)]
>>> Entry.objects.dates("pub_date", "week")
[datetime.date(2005, 2, 14), datetime.date(2005, 3, 14)]
>>> Entry.objects.dates("pub_date", "day")
[datetime.date(2005, 2, 20), datetime.date(2005, 3, 20)]
>>> Entry.objects.dates("pub_date", "day", order="DESC")
[datetime.date(2005, 3, 20), datetime.date(2005, 2, 20)]
>>> Entry.objects.filter(headline__contains="Lennon").dates("pub_date", "day")
[datetime.date(2005, 3, 20)]
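The per-kind "truncation" that dates() performs can be sketched in plain Python (an illustrative helper, not Django's implementation, which does this in SQL):

```python
import datetime

def truncate(d, kind):
    """Truncate a date to the given kind, mirroring dates() semantics."""
    if kind == "year":
        return d.replace(month=1, day=1)
    if kind == "month":
        return d.replace(day=1)
    if kind == "week":
        # Truncate to the Monday of the week containing d.
        return d - datetime.timedelta(days=d.weekday())
    if kind == "day":
        return d
    raise ValueError(kind)

d = datetime.date(2005, 2, 20)                           # a Sunday
assert truncate(d, "year") == datetime.date(2005, 1, 1)
assert truncate(d, "month") == datetime.date(2005, 2, 1)
assert truncate(d, "week") == datetime.date(2005, 2, 14)  # the Monday
```

dates() then returns the distinct truncated values, which is why each kind yields one entry per year, month, week, or day that actually occurs in the data.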

datetimes()

datetimes(field_name, kind, order='ASC', tzinfo=None, is_dst=None)

Returns a QuerySet that evaluates to a list of datetime.datetime objects representing all available dates of a particular kind within the contents of the QuerySet.

field_name should be the name of a DateTimeField of your model.

kind should be either "year", "month", "week", "day", "hour", "minute", or "second". Each datetime.datetime object in the result list is "truncated" to the given type.

order, which defaults to 'ASC', should be either 'ASC' or 'DESC'. This specifies how to order the results.

tzinfo defines the time zone to which datetimes are converted prior to truncation. Indeed, a given datetime has different representations depending on the time zone in use. This parameter must be a datetime.tzinfo object. If it's None, Django uses the current time zone. It has no effect when USE_TZ is False.

is_dst indicates whether or not pytz should interpret nonexistent and ambiguous datetimes in daylight saving time. By default (when is_dst=None), pytz raises an exception for such datetimes.

Deprecated since version 4.0: The is_dst parameter is deprecated and will be removed in Django 5.0.

Note

This function performs time zone conversions directly in the database. As a consequence, your database must be able to interpret the value of tzinfo.tzname(None). This translates into the following requirements:

none()

none()

Calling none() will create a queryset that never returns any objects and no query will be executed when accessing the results. A qs.none() queryset is an instance of EmptyQuerySet.

Examples:

>>> Entry.objects.none()
<QuerySet []>
>>> from django.db.models.query import EmptyQuerySet
>>> isinstance(Entry.objects.none(), EmptyQuerySet)
True

all()

all()

Returns a copy of the current QuerySet (or QuerySet subclass). This can be useful in situations where you might want to pass in either a model manager or a QuerySet and do further filtering on the result. After calling all() on either object, you'll definitely have a QuerySet to work with.

When a QuerySet is evaluated, it typically caches its results. If the data in the database might have changed since a QuerySet was evaluated, you can get updated results for the same query by calling all() on a previously evaluated QuerySet.

union()

union(*other_qs, all=False)

Uses SQL's UNION operator to combine the results of two or more QuerySets. For example:

>>> qs1.union(qs2, qs3)

The UNION operator selects only distinct values by default. To allow duplicate values, use the all=True argument.
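UNION versus UNION ALL semantics can be illustrated with plain Python collections (a sketch of the deduplication behavior only, not of SQL execution):

```python
qs1 = [1, 2, 3]  # stand-ins for two querysets' result rows
qs2 = [2, 3, 4]

# union(): duplicates across the inputs are collapsed, like SQL UNION.
union_distinct = sorted(set(qs1) | set(qs2))
assert union_distinct == [1, 2, 3, 4]

# union(all=True): every row from every input is kept, like UNION ALL.
union_all = sorted(qs1 + qs2)
assert union_all == [1, 2, 2, 3, 3, 4]
```

UNION ALL is typically cheaper because the database can skip the deduplication step.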

union(), intersection(), and difference() return model instances of the type of the first QuerySet even if the arguments are QuerySets of other models. Passing different models works as long as the SELECT list is the same in all QuerySets (at least the types, the names don't matter as long as the types are in the same order). In such cases, you must use the column names from the first QuerySet in QuerySet methods applied to the resulting QuerySet. For example:

>>> qs1 = Author.objects.values_list("name")
>>> qs2 = Entry.objects.values_list("headline")
>>> qs1.union(qs2).order_by("name")

In addition, only LIMIT, OFFSET, COUNT(*), ORDER BY, and specifying columns (i.e. slicing, count(), exists(), order_by(), and values()/values_list()) are allowed on the resulting QuerySet. Further, databases place restrictions on what operations are allowed in the combined queries. For example, most databases don't allow LIMIT or OFFSET in the combined queries.

intersection()

intersection(*other_qs)

Uses SQL's INTERSECT operator to return the shared elements of two or more QuerySets. For example:

>>> qs1.intersection(qs2, qs3)

See union() for some restrictions.

difference()

difference(*other_qs)

Uses SQL's EXCEPT operator to keep only elements present in the QuerySet but not in some other QuerySets. For example:

>>> qs1.difference(qs2, qs3)

See union() for some restrictions.

extra()

extra(select=None, where=None, params=None, tables=None, order_by=None, select_params=None)

Sometimes, the Django query syntax by itself can't easily express a complex WHERE clause. For these edge cases, Django provides the extra() QuerySet modifier — a hook for injecting specific clauses into the SQL generated by a QuerySet.

Use this method as a last resort

This is an old API that we aim to deprecate at some point in the future. Use it only if you cannot express your query using other queryset methods. If you do need to use it, please file a ticket using the QuerySet.extra keyword with your use case (please check the list of existing tickets first) so that we can enhance the QuerySet API to allow removing extra(). We are no longer improving or fixing bugs for this method.

For example, this use of extra():

>>> qs.extra(
...     select={"val": "select col from sometable where othercol = %s"},
...     select_params=(someparam,),
... )

is equivalent to:

>>> qs.annotate(val=RawSQL("select col from sometable where othercol = %s", (someparam,)))

The main benefit of using RawSQL is that you can set output_field if needed. The main downside is that if you refer to some table alias of the queryset in the raw SQL, then it is possible that Django might change that alias (for example, when the queryset is used as a subquery in yet another query).

Warning

You should be very careful whenever you use extra(). Every time you use it, you should escape any parameters that the user can control by using params in order to protect against SQL injection attacks.

You also must not quote placeholders in the SQL string. This example is vulnerable to SQL injection because of the quotes around %s:

SELECT col FROM sometable WHERE othercol = '%s'  # unsafe!

You can read more about how Django's SQL injection protection works.

By definition, these extra lookups may not be portable to different database engines (because you're explicitly writing SQL code) and violate the DRY principle, so you should avoid them if possible.

Specify one or more of params, select, where or tables. None of the arguments is required, but you should use at least one of them.

  • select

    The select argument lets you put extra fields in the SELECT clause. It should be a dictionary mapping attribute names to SQL clauses to use to calculate that attribute.

    Example:

    Entry.objects.extra(select={"is_recent": "pub_date > '2006-01-01'"})
    

    As a result, each Entry object will have an extra attribute, is_recent, a boolean representing whether the entry's pub_date is greater than Jan. 1, 2006.

    Django inserts the given SQL snippet directly into the SELECT statement, so the resulting SQL of the above example would be something like:

    SELECT blog_entry.*, (pub_date > '2006-01-01') AS is_recent
    FROM blog_entry;
    

    The next example is more advanced; it does a subquery to give each resulting Blog object an entry_count attribute, an integer count of associated Entry objects:

    Blog.objects.extra(
        select={
            "entry_count": "SELECT COUNT(*) FROM blog_entry WHERE blog_entry.blog_id = blog_blog.id"
        },
    )
    

    In this particular case, we're exploiting the fact that the query will already contain the blog_blog table in its FROM clause.

    The resulting SQL of the above example would be:

    SELECT blog_blog.*, (SELECT COUNT(*) FROM blog_entry WHERE blog_entry.blog_id = blog_blog.id) AS entry_count
    FROM blog_blog;
    

    Note that the parentheses required by most database engines around subqueries are not required in Django's select clauses. Also note that some database backends, such as some MySQL versions, don't support subqueries.

    In some rare cases, you might wish to pass parameters to the SQL fragments in extra(select=...). For this purpose, use the select_params parameter.

    This will work, for example:

    Blog.objects.extra(
        select={"a": "%s", "b": "%s"},
        select_params=("one", "two"),
    )
    

    If you need to use a literal %s inside your select string, use the sequence %%s.

  • where / tables

    You can define explicit SQL WHERE clauses — perhaps to perform non-explicit joins — by using where. You can manually add tables to the SQL FROM clause by using tables.

    where and tables both take a list of strings. All where parameters are "AND"ed to any other search criteria.

    Example:

    Entry.objects.extra(where=["foo='a' OR bar = 'a'", "baz = 'a'"])
    

    ...translates (roughly) into the following SQL:

    SELECT * FROM blog_entry WHERE (foo='a' OR bar='a') AND (baz='a')
    

    Be careful when using the tables parameter if you're specifying tables that are already used in the query. When you add extra tables via the tables parameter, Django assumes you want that table included an extra time, if it is already included. That creates a problem, since the table name will then be given an alias. If a table appears multiple times in an SQL statement, the second and subsequent occurrences must use aliases so the database can tell them apart. If you're referring to the extra table you added in the extra where parameter this is going to cause errors.

    Normally you'll only be adding extra tables that don't already appear in the query. However, if the case outlined above does occur, there are a few solutions. First, see if you can get by without including the extra table and use the one already in the query. If that isn't possible, put your extra() call at the front of the queryset construction so that your table is the first use of that table. Finally, if all else fails, look at the query produced and rewrite your where addition to use the alias given to your extra table. The alias will be the same each time you construct the queryset in the same way, so you can rely upon the alias name to not change.

  • order_by

    If you need to order the resulting queryset using some of the new fields or tables you have included via extra() use the order_by parameter to extra() and pass in a sequence of strings. These strings should either be model fields (as in the normal order_by() method on querysets), of the form table_name.column_name or an alias for a column that you specified in the select parameter to extra().

    Example:

    q = Entry.objects.extra(select={"is_recent": "pub_date > '2006-01-01'"})
    q = q.extra(order_by=["-is_recent"])
    

    This would sort all the items for which is_recent is true to the front of the result set (True sorts before False in a descending ordering).

    This shows, by the way, that you can make multiple calls to extra() and it will behave as you expect (adding new constraints each time).

  • params

    The where parameter described above may use standard Python database string placeholders — '%s' to indicate parameters the database engine should automatically quote. The params argument is a list of any extra parameters to be substituted.

    Example:

    Entry.objects.extra(where=["headline=%s"], params=["Lennon"])
    

    Always use params instead of embedding values directly into where because params will ensure values are quoted correctly according to your particular backend. For example, quotes will be escaped correctly.

    Bad:

    Entry.objects.extra(where=["headline='Lennon'"])
    

    Good:

    Entry.objects.extra(where=["headline=%s"], params=["Lennon"])
    

Warning

If you are performing queries on MySQL, note that MySQL's silent type coercion may cause unexpected results when mixing types. If you query on a string type column, but with an integer value, MySQL will coerce the types of all values in the table to an integer before performing the comparison. For example, if your table contains the values 'abc' and 'def' and you query for WHERE mycolumn=0, both rows will match. To prevent this, perform the correct typecasting before using the value in a query.
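The rule "always pass values via params" is the same parameterization principle the standard library's sqlite3 module enforces with ? placeholders (where Django's extra() uses %s). A runnable sketch with a throwaway in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (headline TEXT)")
conn.execute("INSERT INTO entry VALUES ('Lennon'), ('McCartney')")

# Good: the value is passed separately and quoted by the driver.
rows = conn.execute(
    "SELECT headline FROM entry WHERE headline = ?", ["Lennon"]
).fetchall()
assert rows == [("Lennon",)]

# A hostile value stays inert when passed as a parameter...
evil = "x' OR '1'='1"
safe = conn.execute(
    "SELECT headline FROM entry WHERE headline = ?", [evil]
).fetchall()
assert safe == []  # no injection, no matches

# ...but matches every row if naively interpolated into the SQL string.
unsafe_sql = "SELECT headline FROM entry WHERE headline = '%s'" % evil
assert len(conn.execute(unsafe_sql).fetchall()) == 2  # injected!
```

The same reasoning applies to extra(where=..., params=...): let the backend quote values, and never embed user-controlled strings into the SQL text.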

defer()

defer(*fields)

In some complex data-modeling situations, your models might contain a lot of fields, some of which could contain a lot of data (for example, text fields), or require expensive processing to convert them to Python objects. If you are using the results of a queryset in some situation where you don't know if you need those particular fields when you initially fetch the data, you can tell Django not to retrieve them from the database.

This is done by passing the names of the fields to not load to defer():

Entry.objects.defer("headline", "body")

A queryset that has deferred fields will still return model instances. Each deferred field will be retrieved from the database if you access that field (one at a time, not all the deferred fields at once).

Note

Deferred fields will not lazy-load like this from asynchronous code. Instead, you will get a SynchronousOnlyOperation exception. If you are writing asynchronous code, you should not try to access any fields that you defer().

You can make multiple calls to defer(). Each call adds new fields to the deferred set:

# Defers both the body and headline fields.
Entry.objects.defer("body").filter(rating=5).defer("headline")

The order in which fields are added to the deferred set does not matter. Calling defer() with a field name that has already been deferred is harmless (the field will still be deferred).
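The accumulation rule can be sketched with plain set semantics. This is an illustration of the documented behavior, not Django's internal representation; the helper name is hypothetical:

```python
def accumulate_deferred(calls):
    """Sketch of successive defer() calls: each call unions its field
    names into one deferred set, so order doesn't matter and repeats
    are harmless; passing None clears the set (like defer(None))."""
    deferred = set()
    for fields in calls:
        if fields is None:
            deferred = set()
        else:
            deferred |= set(fields)
    return deferred


# Mirrors Entry.objects.defer("body").defer("headline").defer("body"):
accumulate_deferred([["body"], ["headline"], ["body"]])
# Mirrors my_queryset.defer("body").defer(None):
accumulate_deferred([["body"], None])
```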

You can defer loading of fields in related models (if the related models are loading via select_related()) by using the standard double-underscore notation to separate related fields:

Blog.objects.select_related().defer("entry__headline", "entry__body")

If you want to clear the set of deferred fields, pass None as a parameter to defer():

# Load all fields immediately.
my_queryset.defer(None)

Some fields in a model won't be deferred, even if you ask for them. You can never defer the loading of the primary key. If you are using select_related() to retrieve related models, you shouldn't defer the loading of the field that connects from the primary model to the related one; doing so will result in an error.

Similarly, calling defer() (or its counterpart only()) including an argument from an aggregation (e.g. using the result of annotate()) doesn't make sense: doing so will raise an exception. The aggregated values will always be fetched into the resulting queryset.

Note

The defer() method (and its cousin, only(), below) are only for advanced use-cases. They provide an optimization for when you have analyzed your queries closely and understand exactly what information you need and have measured that the difference between returning the fields you need and the full set of fields for the model will be significant.

Even if you think you are in the advanced use-case situation, only use defer() when you cannot, at queryset load time, determine if you will need the extra fields or not. If you are frequently loading and using a particular subset of your data, the best choice you can make is to normalize your models and put the non-loaded data into a separate model (and database table). If the columns must stay in the one table for some reason, create a model with Meta.managed = False (see the managed attribute documentation) containing just the fields you normally need to load and use that where you might otherwise call defer(). This makes your code more explicit to the reader, is slightly faster and consumes a little less memory in the Python process.

For example, both of these models use the same underlying database table:

class CommonlyUsedModel(models.Model):
    f1 = models.CharField(max_length=10)

    class Meta:
        managed = False
        db_table = "app_largetable"


class ManagedModel(models.Model):
    f1 = models.CharField(max_length=10)
    f2 = models.CharField(max_length=10)

    class Meta:
        db_table = "app_largetable"


# Two equivalent QuerySets:
CommonlyUsedModel.objects.all()
ManagedModel.objects.defer("f2")

If many fields need to be duplicated in the unmanaged model, it may be best to create an abstract model with the shared fields and then have the unmanaged and managed models inherit from the abstract model.

Note

When calling save() for instances with deferred fields, only the loaded fields will be saved. See save() for more details.

only()

only(*fields)

The only() method is essentially the opposite of defer(). Only the fields passed into this method and that are not already specified as deferred are loaded immediately when the queryset is evaluated.

If you have a model where almost all the fields need to be deferred, using only() to specify the complementary set of fields can result in simpler code.

Suppose you have a model with fields name, age and biography. The following two querysets are the same, in terms of deferred fields:

Person.objects.defer("age", "biography")
Person.objects.only("name")

Whenever you call only() it replaces the set of fields to load immediately. The method's name is mnemonic: only those fields are loaded immediately; the remainder are deferred. Thus, successive calls to only() result in only the final fields being considered:

# This will defer all fields except the headline.
Entry.objects.only("body", "rating").only("headline")

Since defer() acts incrementally (adding fields to the deferred list), you can combine calls to only() and defer() and things will behave logically:

# Final result is that everything except "headline" is deferred.
Entry.objects.only("headline", "body").defer("body")

# Final result loads headline immediately.
Entry.objects.defer("body").only("headline", "body")

All of the cautions in the note for the defer() documentation apply to only() as well. Use it cautiously and only after exhausting your other options.

Using only() and omitting a field requested using select_related() is an error as well. On the other hand, invoking only() without any arguments will return every field (including annotations) fetched by the queryset.

As with defer(), you cannot access the non-loaded fields from asynchronous code and expect them to load. Instead, you will get a SynchronousOnlyOperation exception. Ensure that all fields you might access are in your only() call.

Note

When calling save() for instances with deferred fields, only the loaded fields will be saved. See save() for more details.

Note

When using defer() after only() the fields in defer() will override only() for fields that are listed in both.

using()

using(alias)

This method is for controlling which database the QuerySet will be evaluated against if you are using more than one database. The only argument this method takes is the alias of a database, as defined in DATABASES.

For example:

# queries the database with the 'default' alias.
>>> Entry.objects.all()

# queries the database with the 'backup' alias
>>> Entry.objects.using("backup")

select_for_update()

select_for_update(nowait=False, skip_locked=False, of=(), no_key=False)

Returns a queryset that will lock rows until the end of the transaction, generating a SELECT ... FOR UPDATE SQL statement on supported databases.

Example:

from django.db import transaction

entries = Entry.objects.select_for_update().filter(author=request.user)
with transaction.atomic():
    for entry in entries:
        ...

When the queryset is evaluated (for entry in entries in this case), all matched entries will be locked until the end of the transaction block, meaning that other transactions will be prevented from changing or acquiring locks on them.

Usually, if another transaction has already acquired a lock on one of the selected rows, the query will block until the lock is released. If this is not the behavior you want, call select_for_update(nowait=True). This will make the call non-blocking. If a conflicting lock is already acquired by another transaction, DatabaseError will be raised when the queryset is evaluated. You can also ignore locked rows by using select_for_update(skip_locked=True) instead. The nowait and skip_locked options are mutually exclusive, and attempting to call select_for_update() with both options enabled will result in a ValueError.
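The mutual exclusivity of the two options can be sketched as a simple argument check. This mirrors the documented rule only; the function name is hypothetical and this is not Django's actual implementation:

```python
def check_lock_options(nowait=False, skip_locked=False):
    """Reject the invalid combination described above: nowait and
    skip_locked cannot both be enabled."""
    if nowait and skip_locked:
        raise ValueError("The nowait option cannot be used with skip_locked.")
    return nowait, skip_locked


check_lock_options(nowait=True)                  # fine
check_lock_options(skip_locked=True)             # fine
# check_lock_options(nowait=True, skip_locked=True)  # raises ValueError
```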

By default, select_for_update() locks all rows that are selected by the query. For example, rows of related objects specified in select_related() are locked in addition to rows of the queryset's model. If this isn't desired, specify the related objects you want to lock in select_for_update(of=(...)) using the same fields syntax as select_related(). Use the value 'self' to refer to the queryset's model.

Lock parent models in select_for_update(of=(...))

If you want to lock parent models when using multi-table inheritance, you must specify parent link fields (by default <parent_model_name>_ptr) in the of argument. For example:

Restaurant.objects.select_for_update(of=("self", "place_ptr"))

Using select_for_update(of=(...)) with specified fields

If you want to lock models and specify selected fields, e.g. using values(), you must select at least one field from each model in the of argument. Models without selected fields will not be locked.

On PostgreSQL only, you can pass no_key=True in order to acquire a weaker lock, that still allows creating rows that merely reference locked rows (through a foreign key, for example) while the lock is in place. The PostgreSQL documentation has more details about row-level lock modes.

You can't use select_for_update() on nullable relations:

>>> Person.objects.select_related("hometown").select_for_update()
Traceback (most recent call last):
...
django.db.utils.NotSupportedError: FOR UPDATE cannot be applied to the nullable side of an outer join

To avoid that restriction, you can exclude null objects if you don't care about them:

>>> Person.objects.select_related("hometown").select_for_update().exclude(hometown=None)
<QuerySet [<Person: ...)>, ...]>

The postgresql, oracle, and mysql database backends support select_for_update(). However, MariaDB only supports the nowait argument, MariaDB 10.6+ also supports the skip_locked argument, and MySQL 8.0.1+ supports the nowait, skip_locked, and of arguments. The no_key argument is only supported on PostgreSQL.

Passing nowait=True, skip_locked=True, no_key=True, or of to select_for_update() using database backends that do not support these options, such as MySQL, raises a NotSupportedError. This prevents code from unexpectedly blocking.

Evaluating a queryset with select_for_update() in autocommit mode on backends which support SELECT ... FOR UPDATE raises a TransactionManagementError because the rows are not locked in that case. If allowed, this would facilitate data corruption and could easily be caused by calling code that expects to be run in a transaction outside of one.

Using select_for_update() on backends which do not support SELECT ... FOR UPDATE (such as SQLite) will have no effect. SELECT ... FOR UPDATE will not be added to the query, and an error isn't raised if select_for_update() is used in autocommit mode.

Warning

Although select_for_update() normally fails in autocommit mode, since TestCase automatically wraps each test in a transaction, calling select_for_update() in a TestCase even outside an atomic() block will (perhaps unexpectedly) pass without raising a TransactionManagementError. To properly test select_for_update() you should use TransactionTestCase.

Certain expressions may not be supported

PostgreSQL doesn't support select_for_update() with Window expressions.

raw()

raw(raw_query, params=(), translations=None, using=None)

Takes a raw SQL query, executes it, and returns a django.db.models.query.RawQuerySet instance. This RawQuerySet instance can be iterated over just like a normal QuerySet to provide object instances.

See Performing raw SQL queries for more information.

Warning

raw() always triggers a new query and doesn't account for previous filtering. As such, it should generally be called from the Manager or from a fresh QuerySet instance.

Operators that return new QuerySets

Combined querysets must use the same model.

AND (&)

Combines two QuerySets using the SQL AND operator in a manner similar to chaining filters.

The following are equivalent:

Model.objects.filter(x=1) & Model.objects.filter(y=2)
Model.objects.filter(x=1).filter(y=2)

SQL equivalent:

SELECT ... WHERE x=1 AND y=2

OR (|)

Combines two QuerySets using the SQL OR operator.

The following are equivalent:

Model.objects.filter(x=1) | Model.objects.filter(y=2)
from django.db.models import Q

Model.objects.filter(Q(x=1) | Q(y=2))

SQL equivalent:

SELECT ... WHERE x=1 OR y=2

| is not a commutative operation, as different (though equivalent) queries may be generated.

XOR (^)

New in Django 4.1.

Combines two QuerySets using the SQL XOR operator.

The following are equivalent:

Model.objects.filter(x=1) ^ Model.objects.filter(y=2)
from django.db.models import Q

Model.objects.filter(Q(x=1) ^ Q(y=2))

SQL equivalent:

SELECT ... WHERE x=1 XOR y=2

Note

XOR is natively supported on MariaDB and MySQL. On other databases, x ^ y ^ ... ^ z is converted to an equivalent:

(x OR y OR ... OR z) AND
1=(
    (CASE WHEN x THEN 1 ELSE 0 END) +
    (CASE WHEN y THEN 1 ELSE 0 END) +
    ...
    (CASE WHEN z THEN 1 ELSE 0 END)
)
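The fallback's logic can be checked in plain Python: it is true when at least one condition holds and the count of true conditions is exactly one. This is a sketch of the SQL above, not Django's code:

```python
def emulated_xor(*conds):
    """Mirror the fallback SQL: (x OR y OR ... OR z) AND
    1 = (number of conditions that are true)."""
    return any(conds) and sum(1 for c in conds if c) == 1


emulated_xor(True, False, False)  # exactly one true
emulated_xor(True, True, False)   # more than one true
```

For two operands this agrees with XOR; with more operands the conversion above is true only when exactly one condition holds.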

Methods that do not return QuerySets

The following QuerySet methods evaluate the QuerySet and return something other than a QuerySet.

These methods do not use a cache (see Caching and QuerySets). Rather, they query the database each time they're called.

Because these methods evaluate the QuerySet, they are blocking calls, and so their main (synchronous) versions cannot be called from asynchronous code. For this reason, each has a corresponding asynchronous version with an a prefix - for example, rather than get(…) you can await aget(…).

There is usually no difference in behavior apart from their asynchronous nature, but any differences are noted below next to each method.

Changed in Django 4.1:

The asynchronous version of each method, prefixed with a, was added.

get()

get(*args, **kwargs)
aget(*args, **kwargs)

Asynchronous version: aget()

Returns the object matching the given lookup parameters, which should be in the format described in Field lookups. You should use lookups that are guaranteed unique, such as the primary key or fields in a unique constraint. For example:

Entry.objects.get(id=1)
Entry.objects.get(Q(blog=blog) & Q(entry_number=1))

If you expect a queryset to already return one row, you can use get() without any arguments to return the object for that row:

Entry.objects.filter(pk=1).get()

If get() doesn't find any object, it raises a Model.DoesNotExist exception:

Entry.objects.get(id=-999)  # raises Entry.DoesNotExist

If get() finds more than one object, it raises a Model.MultipleObjectsReturned exception:

Entry.objects.get(name="A Duplicated Name")  # raises Entry.MultipleObjectsReturned

Both these exception classes are attributes of the model class, and specific to that model. If you want to handle such exceptions from several get() calls for different models, you can use their generic base classes. For example, you can use django.core.exceptions.ObjectDoesNotExist to handle DoesNotExist exceptions from multiple models:

from django.core.exceptions import ObjectDoesNotExist

try:
    blog = Blog.objects.get(id=1)
    entry = Entry.objects.get(blog=blog, entry_number=1)
except ObjectDoesNotExist:
    print("Either the blog or entry doesn't exist.")
Changed in Django 4.1:

aget() method was added.

create()

create(**kwargs)
acreate(**kwargs)

Asynchronous version: acreate()

A convenience method for creating an object and saving it all in one step. Thus:

p = Person.objects.create(first_name="Bruce", last_name="Springsteen")

and:

p = Person(first_name="Bruce", last_name="Springsteen")
p.save(force_insert=True)

are equivalent.

The force_insert parameter is documented elsewhere, but all it means is that a new object will always be created. Normally you won't need to worry about this. However, if your model contains a manual primary key value that you set and if that value already exists in the database, a call to create() will fail with an IntegrityError since primary keys must be unique. Be prepared to handle the exception if you are using manual primary keys.

Changed in Django 4.1:

acreate() method was added.

get_or_create()

get_or_create(defaults=None, **kwargs)
aget_or_create(defaults=None, **kwargs)

Asynchronous version: aget_or_create()

A convenience method for looking up an object with the given kwargs (may be empty if your model has defaults for all fields), creating one if necessary.

Returns a tuple of (object, created), where object is the retrieved or created object and created is a boolean specifying whether a new object was created.

This is meant to prevent duplicate objects from being created when requests are made in parallel, and as a shortcut to boilerplatish code. For example:

try:
    obj = Person.objects.get(first_name="John", last_name="Lennon")
except Person.DoesNotExist:
    obj = Person(first_name="John", last_name="Lennon", birthday=date(1940, 10, 9))
    obj.save()

Here, with concurrent requests, multiple attempts to save a Person with the same parameters may be made. To avoid this race condition, the above example can be rewritten using get_or_create() like so:

obj, created = Person.objects.get_or_create(
    first_name="John",
    last_name="Lennon",
    defaults={"birthday": date(1940, 10, 9)},
)

Any keyword arguments passed to get_or_create(), except an optional one called defaults, will be used in a get() call. If an object is found, get_or_create() returns a tuple of that object and False.

Warning

This method is atomic assuming that the database enforces uniqueness of the keyword arguments (see unique or unique_together). If the fields used in the keyword arguments do not have a uniqueness constraint, concurrent calls to this method may result in multiple rows with the same parameters being inserted.

You can specify more complex conditions for the retrieved object by chaining get_or_create() with filter() and using Q objects. For example, to retrieve Robert or Bob Marley if either exists, and create the latter otherwise:

from django.db.models import Q

obj, created = Person.objects.filter(
    Q(first_name="Bob") | Q(first_name="Robert"),
).get_or_create(last_name="Marley", defaults={"first_name": "Bob"})

If multiple objects are found, get_or_create() raises MultipleObjectsReturned. If an object is not found, get_or_create() will instantiate and save a new object, returning a tuple of the new object and True. The new object will be created roughly according to this algorithm:

params = {k: v for k, v in kwargs.items() if "__" not in k}
params.update({k: v() if callable(v) else v for k, v in defaults.items()})
obj = self.model(**params)
obj.save()

In English, that means start with any non-'defaults' keyword argument that doesn't contain a double underscore (which would indicate a non-exact lookup). Then add the contents of defaults, overriding any keys if necessary, and use the result as the keyword arguments to the model class. If there are any callables in defaults, evaluate them. As hinted at above, this is a simplification of the algorithm that is used, but it contains all the pertinent details. The internal implementation has some more error-checking than this and handles some extra edge-conditions; if you're interested, read the code.
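That description can be sketched directly in plain Python. This is the same simplification as the snippet above, with the kwargs-to-constructor-params step pulled into a standalone (hypothetically named) helper, not the internal implementation:

```python
def build_create_params(kwargs, defaults=None):
    """Drop any lookup containing a double underscore (a non-exact
    lookup), then layer in defaults, evaluating any callables, to get
    the keyword arguments for the model class."""
    params = {k: v for k, v in kwargs.items() if "__" not in k}
    params.update(
        {k: v() if callable(v) else v for k, v in (defaults or {}).items()}
    )
    return params


# "first_name__iexact" is excluded (non-exact lookup); the callable
# default is evaluated before being used as a constructor argument.
build_create_params(
    {"first_name__iexact": "john", "last_name": "Lennon"},
    defaults={"first_name": lambda: "John"},
)
```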

If you have a field named defaults and want to use it as an exact lookup in get_or_create(), use 'defaults__exact', like so:

Foo.objects.get_or_create(defaults__exact="bar", defaults={"defaults": "baz"})

The get_or_create() method has similar error behavior to create() when you're using manually specified primary keys. If an object needs to be created and the key already exists in the database, an IntegrityError will be raised.

Finally, a word on using get_or_create() in Django views. Please make sure to use it only in POST requests unless you have a good reason not to. GET requests shouldn't have any effect on data. Instead, use POST whenever a request to a page has a side effect on your data. For more, see Safe methods in the HTTP spec.

Warning

You can use get_or_create() through ManyToManyField attributes and reverse relations. In that case you will restrict the queries inside the context of that relation. That could lead you to some integrity problems if you don't use it consistently.

Given the following models:

class Chapter(models.Model):
    title = models.CharField(max_length=255, unique=True)


class Book(models.Model):
    title = models.CharField(max_length=256)
    chapters = models.ManyToManyField(Chapter)

You can use get_or_create() through Book's chapters field, but it only fetches inside the context of that book:

>>> book = Book.objects.create(title="Ulysses")
>>> book.chapters.get_or_create(title="Telemachus")
(<Chapter: Telemachus>, True)
>>> book.chapters.get_or_create(title="Telemachus")
(<Chapter: Telemachus>, False)
>>> Chapter.objects.create(title="Chapter 1")
<Chapter: Chapter 1>
>>> book.chapters.get_or_create(title="Chapter 1")
# Raises IntegrityError

This is happening because it's trying to get or create "Chapter 1" through the book "Ulysses", but it can't do either: the relation can't fetch that chapter because it isn't related to that book, and it can't create it because the title field must be unique.

Changed in Django 4.1:

aget_or_create() method was added.

update_or_create()

update_or_create(defaults=None, **kwargs)
aupdate_or_create(defaults=None, **kwargs)

Asynchronous version: aupdate_or_create()

A convenience method for updating an object with the given kwargs, creating a new one if necessary. The defaults is a dictionary of (field, value) pairs used to update the object. The values in defaults can be callables.

Returns a tuple of (object, created), where object is the updated or created object and created is a boolean specifying whether a new object was created.

The update_or_create method tries to fetch an object from database based on the given kwargs. If a match is found, it updates the fields passed in the defaults dictionary.

This is meant as a shortcut to boilerplatish code. For example:

defaults = {"first_name": "Bob"}
try:
    obj = Person.objects.get(first_name="John", last_name="Lennon")
    for key, value in defaults.items():
        setattr(obj, key, value)
    obj.save()
except Person.DoesNotExist:
    new_values = {"first_name": "John", "last_name": "Lennon"}
    new_values.update(defaults)
    obj = Person(**new_values)
    obj.save()

This pattern gets quite unwieldy as the number of fields in a model goes up. The above example can be rewritten using update_or_create() like so:

obj, created = Person.objects.update_or_create(
    first_name="John",
    last_name="Lennon",
    defaults={"first_name": "Bob"},
)

For a detailed description of how names passed in kwargs are resolved, see get_or_create().

As described above in get_or_create(), this method is prone to a race-condition which can result in multiple rows being inserted simultaneously if uniqueness is not enforced at the database level.

Like get_or_create() and create(), if you're using manually specified primary keys and an object needs to be created but the key already exists in the database, an IntegrityError is raised.

Changed in Django 4.1:

aupdate_or_create() method was added.

Changed in Django 4.2:

In older versions, update_or_create() didn't specify update_fields when calling Model.save().

bulk_create()

bulk_create(objs, batch_size=None, ignore_conflicts=False, update_conflicts=False, update_fields=None, unique_fields=None)
abulk_create(objs, batch_size=None, ignore_conflicts=False, update_conflicts=False, update_fields=None, unique_fields=None)

Asynchronous version: abulk_create()

This method inserts the provided list of objects into the database in an efficient manner (generally only 1 query, no matter how many objects there are), and returns created objects as a list, in the same order as provided:

>>> objs = Entry.objects.bulk_create(
...     [
...         Entry(headline="This is a test"),
...         Entry(headline="This is only a test"),
...     ]
... )

This has a number of caveats though:

  • The model's save() method will not be called, and the pre_save and post_save signals will not be sent.

  • It does not work with child models in a multi-table inheritance scenario.

  • If the model's primary key is an AutoField, the primary key attribute can only be retrieved on certain databases (currently PostgreSQL, MariaDB 10.5+, and SQLite 3.35+). On other databases, it will not be set.

  • It does not work with many-to-many relationships.

  • It casts objs to a list, which fully evaluates objs if it's a generator. The cast allows inspecting all objects so that any objects with a manually set primary key can be inserted first. If you want to insert objects in batches without evaluating the entire generator at once, you can use this technique as long as the objects don't have any manually set primary keys:

    from itertools import islice
    
    batch_size = 100
    objs = (Entry(headline="Test %s" % i) for i in range(1000))
    while True:
        batch = list(islice(objs, batch_size))
        if not batch:
            break
        Entry.objects.bulk_create(batch, batch_size)
    

The batch_size parameter controls how many objects are created in a single query. The default is to create all objects in one batch, except for SQLite where the default is such that at most 999 variables per query are used.

On databases that support it (all but Oracle), setting the ignore_conflicts parameter to True tells the database to ignore failure to insert any rows that fail constraints such as duplicate unique values.

On databases that support it (all except Oracle and SQLite < 3.24), setting the update_conflicts parameter to True tells the database to update update_fields when a row insertion fails on conflicts. On PostgreSQL and SQLite, in addition to update_fields, a list of unique_fields that may be in conflict must be provided.

Enabling the ignore_conflicts or update_conflicts parameter disables setting the primary key on each model instance (if the database normally supports it).

Warning

On MySQL and MariaDB, setting the ignore_conflicts parameter to True turns certain types of errors, other than duplicate key, into warnings, even with Strict Mode. For example: invalid values or non-nullable violations. See the MySQL documentation and MariaDB documentation for more details.

Changed in Django 4.1:

The update_conflicts, update_fields, and unique_fields parameters were added to support updating fields when a row insertion fails on conflict.

abulk_create() method was added.

bulk_update()

bulk_update(objs, fields, batch_size=None)
abulk_update(objs, fields, batch_size=None)

Asynchronous version: abulk_update()

This method efficiently updates the given fields on the provided model instances, generally with one query, and returns the number of objects updated:

>>> objs = [
...     Entry.objects.create(headline="Entry 1"),
...     Entry.objects.create(headline="Entry 2"),
... ]
>>> objs[0].headline = "This is entry 1"
>>> objs[1].headline = "This is entry 2"
>>> Entry.objects.bulk_update(objs, ["headline"])
2

QuerySet.update() is used to save the changes, so this is more efficient than iterating through the list of models and calling save() on each of them, but it has a few caveats:

  • You cannot update the model's primary key.
  • Each model's save() method isn't called, and the pre_save and post_save signals aren't sent.
  • If updating a large number of columns in a large number of rows, the SQL generated can be very large. Avoid this by specifying a suitable batch_size.
  • Updating fields defined on multi-table inheritance ancestors will incur an extra query per ancestor.
  • When an individual batch contains duplicates, only the first instance in that batch will result in an update.
  • The number of objects updated returned by the function may be fewer than the number of objects passed in. This can be due to duplicate objects passed in which are updated in the same batch or race conditions such that objects are no longer present in the database.

The batch_size parameter controls how many objects are saved in a single query. The default is to update all objects in one batch, except for SQLite and Oracle which have restrictions on the number of variables used in a query.

Changed in Django 4.1:

abulk_update() method was added.

count()

count()
acount()

Asynchronous version: acount()

Returns an integer representing the number of objects in the database matching the QuerySet.

Example:

# Returns the total number of entries in the database.
Entry.objects.count()

# Returns the number of entries whose headline contains 'Lennon'
Entry.objects.filter(headline__contains="Lennon").count()

A count() call performs a SELECT COUNT(*) behind the scenes, so you should always use count() rather than loading all of the record into Python objects and calling len() on the result (unless you need to load the objects into memory anyway, in which case len() will be faster).

Note that if you want the number of items in a QuerySet and are also retrieving model instances from it (for example, by iterating over it), it's probably more efficient to use len(queryset) which won't cause an extra database query like count() would.

If the queryset has already been fully retrieved, count() will use that length rather than perform an extra database query.

Changed in Django 4.1:

acount() method was added.

in_bulk()

in_bulk(id_list=None, *, field_name='pk')
ain_bulk(id_list=None, *, field_name='pk')

Asynchronous version: ain_bulk()

Takes a list of field values (id_list) and the field_name for those values, and returns a dictionary mapping each value to an instance of the object with the given field value. No django.core.exceptions.ObjectDoesNotExist exceptions will ever be raised by in_bulk; that is, any id_list value not matching any instance will simply be ignored. If id_list isn't provided, all objects in the queryset are returned. field_name must be a unique field or a distinct field (if there's only one field specified in distinct()). field_name defaults to the primary key.

Example:

>>> Blog.objects.in_bulk([1])
{1: <Blog: Beatles Blog>}
>>> Blog.objects.in_bulk([1, 2])
{1: <Blog: Beatles Blog>, 2: <Blog: Cheddar Talk>}
>>> Blog.objects.in_bulk([])
{}
>>> Blog.objects.in_bulk()
{1: <Blog: Beatles Blog>, 2: <Blog: Cheddar Talk>, 3: <Blog: Django Weblog>}
>>> Blog.objects.in_bulk(["beatles_blog"], field_name="slug")
{'beatles_blog': <Blog: Beatles Blog>}
>>> Blog.objects.distinct("name").in_bulk(field_name="name")
{'Beatles Blog': <Blog: Beatles Blog>, 'Cheddar Talk': <Blog: Cheddar Talk>, 'Django Weblog': <Blog: Django Weblog>}

If you pass in_bulk() an empty list, you'll get an empty dictionary.

Changed in Django 4.1:

ain_bulk() method was added.

iterator()

iterator(chunk_size=None)
aiterator(chunk_size=None)

Asynchronous version: aiterator()

Evaluates the QuerySet (by performing the query) and returns an iterator (see PEP 234) over the results, or an asynchronous iterator (see PEP 492) if you call its asynchronous version aiterator.

A QuerySet typically caches its results internally so that repeated evaluations do not result in additional queries. In contrast, iterator() will read results directly, without doing any caching at the QuerySet level (internally, the default iterator calls iterator() and caches the return value). For a QuerySet which returns a large number of objects that you only need to access once, this can result in better performance and a significant reduction in memory.

Note that using iterator() on a QuerySet which has already been evaluated will force it to evaluate again, repeating the query.

iterator() is compatible with previous calls to prefetch_related() as long as chunk_size is given. Larger values will necessitate fewer queries to accomplish the prefetching at the cost of greater memory usage.

Note

aiterator() is not compatible with previous calls to prefetch_related().

On some databases (e.g. Oracle, SQLite), the maximum number of terms in an SQL IN clause might be limited. Hence values below this limit should be used. (In particular, when prefetching across two or more relations, a chunk_size should be small enough that the anticipated number of results for each prefetched relation still falls below the limit.)

So long as the QuerySet does not prefetch any related objects, providing no value for chunk_size will result in Django using an implicit default of 2000.

Depending on the database backend, query results will either be loaded all at once or streamed from the database using server-side cursors.

Changed in Django 4.1:

Support for prefetching related objects was added to iterator().

aiterator() method was added.

Deprecated since version 4.1: Using iterator() on a queryset that prefetches related objects without providing the chunk_size is deprecated. In Django 5.0, an exception will be raised.

With server-side cursors

Oracle and PostgreSQL use server-side cursors to stream results from the database without loading the entire result set into memory.

The Oracle database driver always uses server-side cursors.

With server-side cursors, the chunk_size parameter specifies the number of results to cache at the database driver level. Fetching bigger chunks diminishes the number of round trips between the database driver and the database, at the expense of memory.

On PostgreSQL, server-side cursors will only be used when the DISABLE_SERVER_SIDE_CURSORS setting is False. Read Transaction pooling and server-side cursors if you're using a connection pooler configured in transaction pooling mode. When server-side cursors are disabled, the behavior is the same as databases that don't support server-side cursors.

Without server-side cursors

MySQL doesn't support streaming results, hence the Python database driver loads the entire result set into memory. The result set is then transformed into Python row objects by the database adapter using the fetchmany() method defined in PEP 249.

SQLite can fetch results in batches using fetchmany(), but since SQLite doesn't provide isolation between queries within a connection, be careful when writing to the table being iterated over. See Isolation when using QuerySet.iterator() for more information.

The chunk_size parameter controls the size of batches Django retrieves from the database driver. Larger batches decrease the overhead of communicating with the database driver at the expense of a slight increase in memory consumption.

So long as the QuerySet does not prefetch any related objects, providing no value for chunk_size will result in Django using an implicit default of 2000, a value derived from a calculation on the psycopg mailing list:

Assuming rows of 10-20 columns with a mix of textual and numeric data, 2000 is going to fetch less than 100KB of data, which seems a good compromise between the number of rows transferred and the data discarded if the loop is exited early.
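As a minimal sketch of the above (assuming the Entry model from the earlier examples; handle() is a hypothetical per-row function), streaming a large table in batches of 500 rows looks like:

```python
# Stream Entry rows in batches of 500 instead of loading the whole
# result set; iterator() does no QuerySet-level caching, so iterate
# over the results only once.
for entry in Entry.objects.all().iterator(chunk_size=500):
    handle(entry)  # hypothetical per-row processing
```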

latest()

latest(*fields)
alatest(*fields)

Asynchronous version: alatest()

Returns the latest object in the table based on the given field(s).

This example returns the latest Entry in the table, according to the pub_date field:

Entry.objects.latest("pub_date")

You can also choose the latest based on several fields. For example, to select the Entry with the earliest expire_date when two entries have the same pub_date:

Entry.objects.latest("pub_date", "-expire_date")

The negative sign in '-expire_date' means to sort expire_date in descending order. Since latest() gets the last result, the Entry with the earliest expire_date is selected.

If your model's Meta specifies get_latest_by, you can omit any arguments to earliest() or latest(). The fields specified in get_latest_by will be used by default.
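For example, a sketch of a Meta that sets get_latest_by (the exact field layout here is an assumption for illustration):

```python
from django.db import models


class Entry(models.Model):
    pub_date = models.DateField()
    expire_date = models.DateField()

    class Meta:
        # Used by default when latest()/earliest() get no arguments.
        get_latest_by = ["pub_date", "-expire_date"]


# Equivalent to Entry.objects.latest("pub_date", "-expire_date"):
Entry.objects.latest()
```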

Like get(), earliest() and latest() raise DoesNotExist if there is no object with the given parameters.

Note that earliest() and latest() exist purely for convenience and readability.

earliest() and latest() may return instances with null dates.

Since ordering is delegated to the database, results on fields that allow null values may be ordered differently if you use different databases. For example, PostgreSQL and MySQL sort null values as if they are higher than non-null values, while SQLite does the opposite.

You may want to filter out null values:

Entry.objects.filter(pub_date__isnull=False).latest("pub_date")
Changed in Django 4.1:

alatest() method was added.

earliest()

earliest(*fields)
aearliest(*fields)

Asynchronous version: aearliest()

Works otherwise like latest() except the direction is changed.

Changed in Django 4.1:

aearliest() method was added.

first()

first()
afirst()

Asynchronous version: afirst()

Returns the first object matched by the queryset, or None if there is no matching object. If the QuerySet has no ordering defined, then the queryset is automatically ordered by the primary key. This can affect aggregation results as described in Interaction with order_by().

Example:

p = Article.objects.order_by("title", "pub_date").first()

Note that first() is a convenience method; the following code sample is equivalent to the above example:

try:
    p = Article.objects.order_by("title", "pub_date")[0]
except IndexError:
    p = None
Changed in Django 4.1:

afirst() method was added.

last()

last()
alast()

Asynchronous version: alast()

Works like first(), but returns the last object in the queryset.

Changed in Django 4.1:

alast() method was added.

aggregate()

aggregate(*args, **kwargs)
aaggregate(*args, **kwargs)

Asynchronous version: aaggregate()

Returns a dictionary of aggregate values (averages, sums, etc.) calculated over the QuerySet. Each argument to aggregate() specifies a value that will be included in the dictionary that is returned.

The aggregation functions that are provided by Django are described in Aggregation Functions below. Since aggregates are also query expressions, you may combine aggregates with other aggregates or values to create complex aggregates.

Aggregates specified using keyword arguments will use the keyword as the name for the annotation. Anonymous arguments will have a name generated for them based upon the name of the aggregate function and the model field that is being aggregated. Complex aggregates cannot use anonymous arguments and must specify a keyword argument as an alias.

For example, when you are working with blog entries, you may want to know the number of authors that have contributed blog entries:

>>> from django.db.models import Count
>>> Blog.objects.aggregate(Count("entry"))
{'entry__count': 16}

By using a keyword argument to specify the aggregate function, you can control the name of the aggregation value that is returned:

>>> Blog.objects.aggregate(number_of_entries=Count("entry"))
{'number_of_entries': 16}

For an in-depth discussion of aggregation, see the topic guide on aggregation.

Changed in Django 4.1:

aaggregate() method was added.

exists()

exists()
aexists()

Asynchronous version: aexists()

Returns True if the QuerySet contains any results, and False if not. This tries to perform the query in the simplest and fastest way possible, but it does execute nearly the same query as a normal QuerySet query.

exists() is useful for searches relating to the existence of any objects in a QuerySet, particularly in the context of a large QuerySet.

To find whether a queryset contains any items:

if some_queryset.exists():
    print("There is at least one object in some_queryset")

Which will be faster than:

if some_queryset:
    print("There is at least one object in some_queryset")

... but not by a large degree (hence needing a large queryset for efficiency gains).

Additionally, if some_queryset has not yet been evaluated, but you know that it will be at some point, then using some_queryset.exists() will do more overall work (one query for the existence check plus an extra one to later retrieve the results) than using bool(some_queryset), which retrieves the results and then checks if any were returned.

Changed in Django 4.1:

aexists() method was added.

contains()

contains(obj)
acontains(obj)

Asynchronous version: acontains()

Returns True if the QuerySet contains obj, and False if not. This tries to perform the query in the simplest and fastest way possible.

contains() is useful for checking the membership of an object in a QuerySet, particularly in the context of a large QuerySet.

To check whether a queryset contains a specific item:

if some_queryset.contains(obj):
    print("Entry contained in queryset")

This will be faster than the following which requires evaluating and iterating through the entire queryset:

if obj in some_queryset:
    print("Entry contained in queryset")

Like exists(), if some_queryset has not yet been evaluated, but you know that it will be at some point, then using some_queryset.contains(obj) will make an additional database query, generally resulting in slower overall performance.

Changed in Django 4.1:

acontains() method was added.

update()

update(**kwargs)
aupdate(**kwargs)

Asynchronous version: aupdate()

Performs an SQL update query for the specified fields, and returns the number of rows matched (which may not be equal to the number of rows updated if some rows already have the new value).

For example, to turn comments off for all blog entries published in 2010, you could do this:

>>> Entry.objects.filter(pub_date__year=2010).update(comments_on=False)

(This assumes your Entry model has fields pub_date and comments_on.)

You can update multiple fields — there's no limit on how many. For example, here we update the comments_on and headline fields:

>>> Entry.objects.filter(pub_date__year=2010).update(
...     comments_on=False, headline="This is old"
... )

The update() method is applied instantly, and the only restriction on the QuerySet that is updated is that it can only update columns in the model's main table, not on related models. You can't do this, for example:

>>> Entry.objects.update(blog__name="foo")  # Won't work!

Filtering based on related fields is still possible, though:

>>> Entry.objects.filter(blog__id=1).update(comments_on=True)

You cannot call update() on a QuerySet that has had a slice taken or can otherwise no longer be filtered.

The update() method returns the number of affected rows:

>>> Entry.objects.filter(id=64).update(comments_on=True)
1

>>> Entry.objects.filter(slug="nonexistent-slug").update(comments_on=True)
0

>>> Entry.objects.filter(pub_date__year=2010).update(comments_on=False)
132

If you're just updating a record and don't need to do anything with the model object, the most efficient approach is to call update(), rather than loading the model object into memory. For example, instead of doing this:

e = Entry.objects.get(id=10)
e.comments_on = False
e.save()

...do this:

Entry.objects.filter(id=10).update(comments_on=False)

Using update() also prevents a race condition wherein something might change in your database in the short period of time between loading the object and calling save().

Finally, realize that update() does an update at the SQL level and, thus, does not call any save() methods on your models, nor does it emit the pre_save or post_save signals (which are a consequence of calling Model.save()). If you want to update a bunch of records for a model that has a custom save() method, loop over them and call save(), like this:

for e in Entry.objects.filter(pub_date__year=2010):
    e.comments_on = False
    e.save()
Changed in Django 4.1:

aupdate() method was added.

Ordered queryset

Chaining order_by() with update() is supported only on MariaDB and MySQL, and is ignored on other databases. This is useful for updating a unique field in the order that is specified without conflicts. For example:

Entry.objects.order_by("-number").update(number=F("number") + 1)

Note

order_by() clause will be ignored if it contains annotations, inherited fields, or lookups spanning relations.

delete()

delete()
adelete()

Asynchronous version: adelete()

Performs an SQL delete query on all rows in the QuerySet and returns the number of objects deleted and a dictionary with the number of deletions per object type.

The delete() method is applied instantly. You cannot call delete() on a QuerySet that has had a slice taken or can otherwise no longer be filtered.

For example, to delete all the entries in a particular blog:

>>> b = Blog.objects.get(pk=1)

# Delete all the entries belonging to this Blog.
>>> Entry.objects.filter(blog=b).delete()
(4, {'blog.Entry': 2, 'blog.Entry_authors': 2})

By default, Django's ForeignKey emulates the SQL constraint ON DELETE CASCADE — in other words, any objects with foreign keys pointing at the objects to be deleted will be deleted along with them. For example:

>>> blogs = Blog.objects.all()

# This will delete all Blogs and all of their Entry objects.
>>> blogs.delete()
(5, {'blog.Blog': 1, 'blog.Entry': 2, 'blog.Entry_authors': 2})

This cascade behavior can be customized via the on_delete argument to ForeignKey.

The delete() method does a bulk delete and does not call any delete() methods on your models. It does, however, emit the pre_delete and post_delete signals for all deleted objects (including cascaded deletions).

Django needs to fetch objects into memory to send signals and handle cascades. However, if there are no cascades and no signals, then Django may take a fast-path and delete objects without fetching into memory. For large deletes this can result in significantly reduced memory usage. The number of executed queries can be reduced, too.

ForeignKeys with on_delete set to DO_NOTHING do not prevent taking the fast-path in deletion.

Note that the queries generated in object deletion are an implementation detail subject to change.

Changed in Django 4.1:

adelete() method was added.

as_manager()

classmethod as_manager()

Class method that returns an instance of Manager with a copy of the QuerySet’s methods. See QuerySet のメソッドで、マネージャを生成する for more details.

Note that unlike the other entries in this section, this does not have an asynchronous variant as it does not execute a query.
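A brief sketch of the pattern (the PublishedQuerySet class and status field are hypothetical):

```python
from django.db import models


class PublishedQuerySet(models.QuerySet):
    def published(self):
        return self.filter(status="published")


class Article(models.Model):
    status = models.CharField(max_length=20)

    # Copies published() (and the other QuerySet methods) onto a Manager.
    objects = PublishedQuerySet.as_manager()


# The custom method is now chainable from the manager:
Article.objects.published()
```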

explain()

explain(format=None, **options)
aexplain(format=None, **options)

Asynchronous version: aexplain()

Returns a string of the QuerySet’s execution plan, which details how the database would execute the query, including any indexes or joins that would be used. Knowing these details may help you improve the performance of slow queries.

For example, when using PostgreSQL:

>>> print(Blog.objects.filter(title="My Blog").explain())
Seq Scan on blog  (cost=0.00..35.50 rows=10 width=12)
  Filter: (title = 'My Blog'::bpchar)

The output differs significantly between databases.

explain() is supported by all built-in database backends except Oracle because an implementation there isn't straightforward.

The format parameter changes the output format from the database's default, which is usually text-based. PostgreSQL supports 'TEXT', 'JSON', 'YAML', and 'XML' formats. MariaDB and MySQL support 'TEXT' (also called 'TRADITIONAL') and 'JSON' formats. MySQL 8.0.16+ also supports an improved 'TREE' format, which is similar to PostgreSQL's 'TEXT' output and is used by default, if supported.

Some databases accept flags that can return more information about the query. Pass these flags as keyword arguments. For example, when using PostgreSQL:

>>> print(Blog.objects.filter(title="My Blog").explain(verbose=True, analyze=True))
Seq Scan on public.blog  (cost=0.00..35.50 rows=10 width=12) (actual time=0.004..0.004 rows=10 loops=1)
  Output: id, title
  Filter: (blog.title = 'My Blog'::bpchar)
Planning time: 0.064 ms
Execution time: 0.058 ms

On some databases, flags may cause the query to be executed which could have adverse effects on your database. For example, the ANALYZE flag supported by MariaDB, MySQL 8.0.18+, and PostgreSQL could result in changes to data if there are triggers or if a function is called, even for a SELECT query.

Changed in Django 4.1:

aexplain() method was added.

Field ルックアップ

Field lookups are how you specify the meat of an SQL WHERE clause. They're specified as keyword arguments to the QuerySet methods filter(), exclude() and get().

For an introduction, see models and database queries documentation.

Django's built-in lookups are listed below. It is also possible to write custom lookups for model fields.

As a convenience, when no lookup type is provided (like in Entry.objects.get(id=14)), the lookup type is assumed to be exact.

exact

Exact match. If the value provided for comparison is None, it will be interpreted as an SQL NULL (see isnull for more details).

Example:

Entry.objects.get(id__exact=14)
Entry.objects.get(id__exact=None)

SQL equivalents:

SELECT ... WHERE id = 14;
SELECT ... WHERE id IS NULL;

MySQL comparisons

In MySQL, a database table's "collation" setting determines whether exact comparisons are case-sensitive. This is a database setting, not a Django setting. It's possible to configure your MySQL tables to use case-sensitive comparisons, but some trade-offs are involved. For more information about this, see the collation section in the databases documentation.

iexact

Case-insensitive exact match. If the value provided for comparison is None, it will be interpreted as an SQL NULL (see isnull for more details).

Example:

Blog.objects.get(name__iexact="beatles blog")
Blog.objects.get(name__iexact=None)

SQL equivalents:

SELECT ... WHERE name ILIKE 'beatles blog';
SELECT ... WHERE name IS NULL;

Note the first query will match 'Beatles Blog', 'beatles blog', 'BeAtLes BLoG', etc.

SQLite users

When using the SQLite backend and non-ASCII strings, bear in mind the database note about string comparisons. SQLite does not do case-insensitive matching for non-ASCII strings.

contains

Case-sensitive containment test.

Example:

Entry.objects.get(headline__contains="Lennon")

SQL equivalent:

SELECT ... WHERE headline LIKE '%Lennon%';

Note this will match the headline 'Lennon honored today' but not 'lennon honored today'.

SQLite users

SQLite doesn't support case-sensitive LIKE statements; contains acts like icontains for SQLite. See the database note for more information.

icontains

Case-insensitive containment test.

Example:

Entry.objects.get(headline__icontains="Lennon")

SQL equivalent:

SELECT ... WHERE headline ILIKE '%Lennon%';

SQLite users

When using the SQLite backend and non-ASCII strings, bear in mind the database note about string comparisons.

in

In a given iterable; often a list, tuple, or queryset. It's not a common use case, but strings (being iterables) are accepted.

Example:

Entry.objects.filter(id__in=[1, 3, 4])
Entry.objects.filter(headline__in="abc")

SQL equivalents:

SELECT ... WHERE id IN (1, 3, 4);
SELECT ... WHERE headline IN ('a', 'b', 'c');

You can also use a queryset to dynamically evaluate the list of values instead of providing a list of literal values:

inner_qs = Blog.objects.filter(name__contains="Cheddar")
entries = Entry.objects.filter(blog__in=inner_qs)

This queryset will be evaluated as a subselect statement:

SELECT ... WHERE blog.id IN (SELECT id FROM ... WHERE NAME LIKE '%Cheddar%')

If you pass in a QuerySet resulting from values() or values_list() as the value to an __in lookup, you need to ensure you are only extracting one field in the result. For example, this will work (filtering on the blog names):

inner_qs = Blog.objects.filter(name__contains="Ch").values("name")
entries = Entry.objects.filter(blog__name__in=inner_qs)

This example will raise an exception, since the inner query is trying to extract two field values, where only one is expected:

# Bad code! Will raise a TypeError.
inner_qs = Blog.objects.filter(name__contains="Ch").values("name", "id")
entries = Entry.objects.filter(blog__name__in=inner_qs)

Performance considerations

Be cautious about using nested queries and understand your database server's performance characteristics (if in doubt, benchmark!). Some database backends, most notably MySQL, don't optimize nested queries very well. It is more efficient, in those cases, to extract a list of values and then pass that into the second query. That is, execute two queries instead of one:

values = Blog.objects.filter(name__contains="Cheddar").values_list("pk", flat=True)
entries = Entry.objects.filter(blog__in=list(values))

Note the list() call around the Blog QuerySet to force execution of the first query. Without it, a nested query would be executed, because QuerySets are lazy.

gt

Greater than.

Example:

Entry.objects.filter(id__gt=4)

SQL equivalent:

SELECT ... WHERE id > 4;

gte

Greater than or equal to.

lt

Less than.

lte

Less than or equal to.

startswith

Case-sensitive starts-with.

Example:

Entry.objects.filter(headline__startswith="Lennon")

SQL equivalent:

SELECT ... WHERE headline LIKE 'Lennon%';

SQLite doesn't support case-sensitive LIKE statements; startswith acts like istartswith for SQLite.

istartswith

Case-insensitive starts-with.

Example:

Entry.objects.filter(headline__istartswith="Lennon")

SQL equivalent:

SELECT ... WHERE headline ILIKE 'Lennon%';

SQLite users

When using the SQLite backend and non-ASCII strings, bear in mind the database note about string comparisons.

endswith

Case-sensitive ends-with.

Example:

Entry.objects.filter(headline__endswith="Lennon")

SQL equivalent:

SELECT ... WHERE headline LIKE '%Lennon';

SQLite users

SQLite doesn't support case-sensitive LIKE statements; endswith acts like iendswith for SQLite. Refer to the database note documentation for more.

iendswith

Case-insensitive ends-with.

Example:

Entry.objects.filter(headline__iendswith="Lennon")

SQL equivalent:

SELECT ... WHERE headline ILIKE '%Lennon'

SQLite users

When using the SQLite backend and non-ASCII strings, bear in mind the database note about string comparisons.

range

Range test (inclusive).

Example:

import datetime

start_date = datetime.date(2005, 1, 1)
end_date = datetime.date(2005, 3, 31)
Entry.objects.filter(pub_date__range=(start_date, end_date))

SQL equivalent:

SELECT ... WHERE pub_date BETWEEN '2005-01-01' and '2005-03-31';

You can use range anywhere you can use BETWEEN in SQL — for dates, numbers and even characters.

Warning

Filtering a DateTimeField with dates won't include items on the last day, because the bounds are interpreted as "0am on the given date". If pub_date was a DateTimeField, the above expression would be turned into this SQL:

SELECT ... WHERE pub_date BETWEEN '2005-01-01 00:00:00' and '2005-03-31 00:00:00';

Generally speaking, you can't mix dates and datetimes.
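One common workaround (a sketch, not the only option) is to drop the inclusive BETWEEN and use an explicit half-open interval with datetimes:

```python
import datetime

# The upper bound is the start of the day *after* the last day you
# want, and __lt keeps the interval half-open, so every timestamp on
# 2005-03-31 is included but nothing from 2005-04-01 onward.
start = datetime.datetime(2005, 1, 1)
end = datetime.datetime(2005, 4, 1)
Entry.objects.filter(pub_date__gte=start, pub_date__lt=end)
```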

date

For datetime fields, casts the value as date. Allows chaining additional field lookups. Takes a date value.

Example:

Entry.objects.filter(pub_date__date=datetime.date(2005, 1, 1))
Entry.objects.filter(pub_date__date__gt=datetime.date(2005, 1, 1))

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

When USE_TZ is True, fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

year

For date and datetime fields, an exact year match. Allows chaining additional field lookups. Takes an integer year.

Example:

Entry.objects.filter(pub_date__year=2005)
Entry.objects.filter(pub_date__year__gte=2005)

SQL equivalent:

SELECT ... WHERE pub_date BETWEEN '2005-01-01' AND '2005-12-31';
SELECT ... WHERE pub_date >= '2005-01-01';

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

iso_year

For date and datetime fields, an exact ISO 8601 week-numbering year match. Allows chaining additional field lookups. Takes an integer year.

Example:

Entry.objects.filter(pub_date__iso_year=2005)
Entry.objects.filter(pub_date__iso_year__gte=2005)

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

month

For date and datetime fields, an exact month match. Allows chaining additional field lookups. Takes an integer 1 (January) through 12 (December).

Example:

Entry.objects.filter(pub_date__month=12)
Entry.objects.filter(pub_date__month__gte=6)

SQL equivalent:

SELECT ... WHERE EXTRACT('month' FROM pub_date) = '12';
SELECT ... WHERE EXTRACT('month' FROM pub_date) >= '6';

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

day

For date and datetime fields, an exact day match. Allows chaining additional field lookups. Takes an integer day.

Example:

Entry.objects.filter(pub_date__day=3)
Entry.objects.filter(pub_date__day__gte=3)

SQL equivalent:

SELECT ... WHERE EXTRACT('day' FROM pub_date) = '3';
SELECT ... WHERE EXTRACT('day' FROM pub_date) >= '3';

(The exact SQL syntax varies for each database engine.)

Note this will match any record with a pub_date on the third day of the month, such as January 3, July 3, etc.

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

week

For date and datetime fields, return the week number (1-52 or 53) according to ISO-8601, i.e., weeks start on a Monday and the first week contains the year's first Thursday.

Example:

Entry.objects.filter(pub_date__week=52)
Entry.objects.filter(pub_date__week__gte=32, pub_date__week__lte=38)

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

week_day

For date and datetime fields, a 'day of the week' match. Allows chaining additional field lookups.

Takes an integer value representing the day of week from 1 (Sunday) to 7 (Saturday).

Example:

Entry.objects.filter(pub_date__week_day=2)
Entry.objects.filter(pub_date__week_day__gte=2)

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

Note this will match any record with a pub_date that falls on a Monday (day 2 of the week), regardless of the month or year in which it occurs. Week days are indexed with day 1 being Sunday and day 7 being Saturday.

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

iso_week_day

For date and datetime fields, an exact ISO 8601 day of the week match. Allows chaining additional field lookups.

Takes an integer value representing the day of the week from 1 (Monday) to 7 (Sunday).

Example:

Entry.objects.filter(pub_date__iso_week_day=1)
Entry.objects.filter(pub_date__iso_week_day__gte=1)

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

Note this will match any record with a pub_date that falls on a Monday (day 1 of the week), regardless of the month or year in which it occurs. Week days are indexed with day 1 being Monday and day 7 being Sunday.

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

quarter

For date and datetime fields, a 'quarter of the year' match. Allows chaining additional field lookups. Takes an integer value between 1 and 4 representing the quarter of the year.

Example to retrieve entries in the second quarter (April 1 to June 30):

Entry.objects.filter(pub_date__quarter=2)

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

time

For datetime fields, casts the value as time. Allows chaining additional field lookups. Takes a datetime.time value.

Example:

Entry.objects.filter(pub_date__time=datetime.time(14, 30))
Entry.objects.filter(pub_date__time__range=(datetime.time(8), datetime.time(17)))

(No equivalent SQL code fragment is included for this lookup because implementation of the relevant query varies among different database engines.)

When USE_TZ is True, fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

hour

For datetime and time fields, an exact hour match. Allows chaining additional field lookups. Takes an integer between 0 and 23.

Example:

Event.objects.filter(timestamp__hour=23)
Event.objects.filter(time__hour=5)
Event.objects.filter(timestamp__hour__gte=12)

SQL equivalent:

SELECT ... WHERE EXTRACT('hour' FROM timestamp) = '23';
SELECT ... WHERE EXTRACT('hour' FROM time) = '5';
SELECT ... WHERE EXTRACT('hour' FROM timestamp) >= '12';

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

minute

For datetime and time fields, an exact minute match. Allows chaining additional field lookups. Takes an integer between 0 and 59.

Example:

Event.objects.filter(timestamp__minute=29)
Event.objects.filter(time__minute=46)
Event.objects.filter(timestamp__minute__gte=29)

SQL equivalent:

SELECT ... WHERE EXTRACT('minute' FROM timestamp) = '29';
SELECT ... WHERE EXTRACT('minute' FROM time) = '46';
SELECT ... WHERE EXTRACT('minute' FROM timestamp) >= '29';

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

second

For datetime and time fields, an exact second match. Allows chaining additional field lookups. Takes an integer between 0 and 59.

Example:

Event.objects.filter(timestamp__second=31)
Event.objects.filter(time__second=2)
Event.objects.filter(timestamp__second__gte=31)

SQL equivalent:

SELECT ... WHERE EXTRACT('second' FROM timestamp) = '31';
SELECT ... WHERE EXTRACT('second' FROM time) = '2';
SELECT ... WHERE EXTRACT('second' FROM timestamp) >= '31';

(The exact SQL syntax varies for each database engine.)

When USE_TZ is True, datetime fields are converted to the current time zone before filtering. This requires time zone definitions in the database.

isnull

Takes either True or False, which correspond to SQL queries of IS NULL and IS NOT NULL, respectively.

Example:

Entry.objects.filter(pub_date__isnull=True)

SQL equivalent:

SELECT ... WHERE pub_date IS NULL;

regex

Case-sensitive regular expression match.

The regular expression syntax is that of the database backend in use. In the case of SQLite, which has no built in regular expression support, this feature is provided by a (Python) user-defined REGEXP function, and the regular expression syntax is therefore that of Python's re module.

Example:

Entry.objects.get(title__regex=r"^(An?|The) +")

SQL equivalents:

SELECT ... WHERE title REGEXP BINARY '^(An?|The) +'; -- MySQL

SELECT ... WHERE REGEXP_LIKE(title, '^(An?|The) +', 'c'); -- Oracle

SELECT ... WHERE title ~ '^(An?|The) +'; -- PostgreSQL

SELECT ... WHERE title REGEXP '^(An?|The) +'; -- SQLite

Using raw strings (e.g., r'foo' instead of 'foo') for passing in the regular expression syntax is recommended.

iregex

Case-insensitive regular expression match.

Example:

Entry.objects.get(title__iregex=r"^(an?|the) +")

SQL equivalents:

SELECT ... WHERE title REGEXP '^(an?|the) +'; -- MySQL

SELECT ... WHERE REGEXP_LIKE(title, '^(an?|the) +', 'i'); -- Oracle

SELECT ... WHERE title ~* '^(an?|the) +'; -- PostgreSQL

SELECT ... WHERE title REGEXP '(?i)^(an?|the) +'; -- SQLite

Aggregation functions

Django provides the following aggregation functions in the django.db.models module. For details on how to use these aggregate functions, see the topic guide on aggregation. See the Aggregate documentation to learn how to create your aggregates.

Warning

SQLite can't handle aggregation on date/time fields out of the box. This is because there are no native date/time fields in SQLite and Django currently emulates these features using a text field. Attempts to use aggregation on date/time fields in SQLite will raise NotSupportedError.

Empty querysets or groups

Aggregation functions return None when used with an empty QuerySet or group. For example, the Sum aggregation function returns None instead of 0 if the QuerySet contains no entries or for any empty group in a non-empty QuerySet. To return another value instead, define the default argument. Count is an exception to this behavior; it returns 0 if the QuerySet is empty since Count does not support the default argument.
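A minimal pure-Python sketch of this behavior, where the hypothetical default parameter below stands in for the aggregates' default argument (SQL aggregates other than COUNT yield NULL, i.e. None, on empty input):

```python
# Like Sum/Avg/Max/...: NULL (None) on empty input unless a default is given.
def sql_sum(values, default=None):
    return sum(values) if values else default

# COUNT is the exception: it returns 0, never NULL.
def sql_count(values):
    return len(values)

assert sql_sum([]) is None           # Sum() over no rows
assert sql_sum([], default=0) == 0   # Sum("field", default=0)
assert sql_sum([1, 2, 3]) == 6
assert sql_count([]) == 0            # Count() on an empty queryset
```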

All aggregates have the following parameters in common:

expressions

Strings that reference fields on the model, transforms of the field, or query expressions.

output_field

An optional argument that represents the model field of the return value.

Note

When combining multiple field types, Django can only determine the output_field if all fields are of the same type. Otherwise, you must provide the output_field yourself.

filter

An optional Q object that's used to filter the rows that are aggregated.

See Conditional aggregation and Filtering on annotations for example usage.

default

An optional argument that allows specifying a value to use as a default value when the queryset (or grouping) contains no entries.

**extra

Keyword arguments that can provide extra context for the SQL generated by the aggregate.

Avg

class Avg(expression, output_field=None, distinct=False, filter=None, default=None, **extra)

Returns the mean value of the given expression, which must be numeric unless you specify a different output_field.

  • Default alias: <field>__avg
  • Return type: float if input is int, otherwise same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.
distinct

Optional. If distinct=True, Avg returns the mean value of unique values. This is the SQL equivalent of AVG(DISTINCT <field>). The default value is False.
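The effect of distinct can be sketched in plain Python with the standard statistics module (the numbers below are illustrative only):

```python
from statistics import mean

values = [10, 10, 20, 40]

assert mean(values) == 20           # AVG(field): (10+10+20+40) / 4
assert mean(set(values)) == 70 / 3  # AVG(DISTINCT field): (10+20+40) / 3
```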

Count

class Count(expression, distinct=False, filter=None, **extra)

Returns the number of objects that are related through the provided expression. Count('*') is equivalent to the SQL COUNT(*) expression.

  • Default alias: <field>__count
  • Return type: int
distinct

Optional. If distinct=True, the count will only include unique instances. This is the SQL equivalent of COUNT(DISTINCT <field>). The default value is False.

Note

The default argument is not supported.

Max

class Max(expression, output_field=None, filter=None, default=None, **extra)

Returns the maximum value of the given expression.

  • Default alias: <field>__max
  • Return type: same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.

Min

class Min(expression, output_field=None, filter=None, default=None, **extra)

Returns the minimum value of the given expression.

  • Default alias: <field>__min
  • Return type: same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.

StdDev

class StdDev(expression, output_field=None, sample=False, filter=None, default=None, **extra)

Returns the standard deviation of the data in the provided expression.

  • Default alias: <field>__stddev
  • Return type: float if input is int, otherwise same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.
sample

Optional. By default, StdDev returns the population standard deviation. However, if sample=True, the return value will be the sample standard deviation.
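The distinction between the two can be sketched with Python's standard statistics module, which uses the same definitions (the dataset below is illustrative only):

```python
from statistics import pstdev, stdev

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation: divides by N (sample=False, the default).
assert pstdev(data) == 2.0

# Sample standard deviation: divides by N - 1 (sample=True).
assert round(stdev(data), 4) == 2.1381
```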

Sum

class Sum(expression, output_field=None, distinct=False, filter=None, default=None, **extra)

Computes the sum of all values of the given expression.

  • Default alias: <field>__sum
  • Return type: same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.
distinct

Optional. If distinct=True, Sum returns the sum of unique values. This is the SQL equivalent of SUM(DISTINCT <field>). The default value is False.

Variance

class Variance(expression, output_field=None, sample=False, filter=None, default=None, **extra)

Returns the variance of the data in the provided expression.

  • Default alias: <field>__variance
  • Return type: float if input is int, otherwise same as input field, or output_field if supplied. If the queryset or grouping is empty, default is returned.
sample

Optional. By default, Variance returns the population variance. However, if sample=True, the return value will be the sample variance.
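As with StdDev, the statistics module offers a compact way to see the difference (illustrative data only):

```python
from statistics import pvariance, variance

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population variance: divides by N (sample=False, the default).
assert pvariance(data) == 4

# Sample variance: divides by N - 1 (sample=True).
assert variance(data) == 32 / 7
```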
