Django--Managers

Manager

Concepts:

1. A Manager is the interface through which database query operations are provided to Django models. Looking at its definition, the Manager class itself is empty; all of its functionality comes from BaseManager and QuerySet.

2. Every model class in Django has at least one Manager.

3. By default, Django adds a Manager named objects to every model class. You can give each model class's manager a different name: simply assign a Manager instance to a class attribute of your choosing, as in the sketch below.
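A minimal sketch of this renaming (the Person model and the people attribute are hypothetical examples, not from the original post):

from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=50)

    # Bind the manager to an attribute name of your choosing. Once any
    # manager is declared explicitly, Django no longer auto-creates "objects".
    people = models.Manager()

# Person.people.all() now works; Person.objects is no longer defined.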

class Manager(BaseManager.from_queryset(QuerySet)):
    pass

class BaseManager

class BaseManager:
    # To retain order, track each time a Manager instance is created.
    creation_counter = 0

    # Set to True for the 'objects' managers that are automatically created.
    auto_created = False

    #: If set to True the manager will be serialized into migrations and will
    #: thus be available in e.g. RunPython operations.
    use_in_migrations = False

    def __new__(cls, *args, **kwargs):
        # Capture the arguments to make returning them trivial.
        obj = super().__new__(cls)
        obj._constructor_args = (args, kwargs)
        return obj

    def __init__(self):
        super().__init__()
        self._set_creation_counter()
        self.model = None
        self.name = None
        self._db = None
        self._hints = {}

    def __str__(self):
        """Return "app_label.model_label.manager_name"."""
        return '%s.%s' % (self.model._meta.label, self.name)

    def deconstruct(self):
        """
        Return a 5-tuple of the form (as_manager (True), manager_class,
        queryset_class, args, kwargs).

        Raise a ValueError if the manager is dynamically generated.
        """
        qs_class = self._queryset_class
        if getattr(self, '_built_with_as_manager', False):
            # using MyQuerySet.as_manager()
            return (
                True,  # as_manager
                None,  # manager_class
                '%s.%s' % (qs_class.__module__, qs_class.__name__),  # qs_class
                None,  # args
                None,  # kwargs
            )
        else:
            module_name = self.__module__
            name = self.__class__.__name__
            # Make sure it's actually there and not an inner class
            module = import_module(module_name)
            if not hasattr(module, name):
                raise ValueError(
                    "Could not find manager %s in %s.\n"
                    "Please note that you need to inherit from managers you "
                    "dynamically generated with 'from_queryset()'."
                    % (name, module_name)
                )
            return (
                False,  # as_manager
                '%s.%s' % (module_name, name),  # manager_class
                None,  # qs_class
                self._constructor_args[0],  # args
                self._constructor_args[1],  # kwargs
            )

    def check(self, **kwargs):
        return []

    @classmethod
    def _get_queryset_methods(cls, queryset_class):
        def create_method(name, method):
            def manager_method(self, *args, **kwargs):
                return getattr(self.get_queryset(), name)(*args, **kwargs)
            manager_method.__name__ = method.__name__
            manager_method.__doc__ = method.__doc__
            return manager_method

        new_methods = {}
        for name, method in inspect.getmembers(queryset_class, predicate=inspect.isfunction):
            # Only copy missing methods.
            if hasattr(cls, name):
                continue
            # Only copy public methods or methods with the attribute `queryset_only=False`.
            queryset_only = getattr(method, 'queryset_only', None)
            if queryset_only or (queryset_only is None and name.startswith('_')):
                continue
            # Copy the method onto the manager.
            new_methods[name] = create_method(name, method)
        return new_methods

    @classmethod
    def from_queryset(cls, queryset_class, class_name=None):
        if class_name is None:
            class_name = '%sFrom%s' % (cls.__name__, queryset_class.__name__)
        class_dict = {
            '_queryset_class': queryset_class,
        }
        class_dict.update(cls._get_queryset_methods(queryset_class))
        return type(class_name, (cls,), class_dict)

    def contribute_to_class(self, model, name):
        if not self.name:
            self.name = name
        self.model = model

        setattr(model, name, ManagerDescriptor(self))

        model._meta.add_manager(self)

    def _set_creation_counter(self):
        """
        Set the creation counter value for this instance and increment the
        class-level copy.
        """
        self.creation_counter = BaseManager.creation_counter
        BaseManager.creation_counter += 1

    def db_manager(self, using=None, hints=None):
        obj = copy.copy(self)
        obj._db = using or self._db
        obj._hints = hints or self._hints
        return obj

    @property
    def db(self):
        return self._db or router.db_for_read(self.model, **self._hints)

    #######################
    # PROXIES TO QUERYSET #
    #######################

    def get_queryset(self):
        """
        Return a new QuerySet object. Subclasses can override this method to
        customize the behavior of the Manager.
        """
        return self._queryset_class(model=self.model, using=self._db, hints=self._hints)

    def all(self):
        # We can't proxy this method through the `QuerySet` like we do for the
        # rest of the `QuerySet` methods. This is because `QuerySet.all()`
        # works by creating a "copy" of the current queryset and in making said
        # copy, all the cached `prefetch_related` lookups are lost. See the
        # implementation of `RelatedManager.get_queryset()` for a better
        # understanding of how this comes into play.
        return self.get_queryset()

    def __eq__(self, other):
        return (
            isinstance(other, self.__class__) and
            self._constructor_args == other._constructor_args
        )

    def __hash__(self):
        return id(self)

class QuerySet

class QuerySet:
    """Represent a lazy database lookup for a set of objects."""

    def __init__(self, model=None, query=None, using=None, hints=None):
        self.model = model
        self._db = using
        self._hints = hints or {}
        self.query = query or sql.Query(self.model)
        self._result_cache = None
        self._sticky_filter = False
        self._for_write = False
        self._prefetch_related_lookups = ()
        self._prefetch_done = False
        self._known_related_objects = {}  # {rel_field: {pk: rel_obj}}
        self._iterable_class = ModelIterable
        self._fields = None

    def as_manager(cls):
        # Address the circular dependency between `Queryset` and `Manager`.
        from django.db.models.manager import Manager
        manager = Manager.from_queryset(cls)()
        manager._built_with_as_manager = True
        return manager
    as_manager.queryset_only = True
    as_manager = classmethod(as_manager)
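
    # Usage sketch (not part of Django's source; the model and queryset names
    # are hypothetical): as_manager() turns a custom QuerySet into a manager,
    # setting _built_with_as_manager so deconstruct() can serialize it.
    #
    #     class PublishedQuerySet(models.QuerySet):
    #         def published(self):
    #             return self.filter(is_published=True)
    #
    #     class Article(models.Model):
    #         objects = PublishedQuerySet.as_manager()
    #
    #     Article.objects.published()  # proxies PublishedQuerySet.published()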

    ########################
    # PYTHON MAGIC METHODS #
    ########################

    def __deepcopy__(self, memo):
        """Don't populate the QuerySet's cache."""
        obj = self.__class__()
        for k, v in self.__dict__.items():
            if k == '_result_cache':
                obj.__dict__[k] = None
            else:
                obj.__dict__[k] = copy.deepcopy(v, memo)
        return obj

    def __getstate__(self):
        # Force the cache to be fully populated.
        self._fetch_all()
        obj_dict = self.__dict__.copy()
        obj_dict[DJANGO_VERSION_PICKLE_KEY] = get_version()
        return obj_dict

    def __setstate__(self, state):
        msg = None
        pickled_version = state.get(DJANGO_VERSION_PICKLE_KEY)
        if pickled_version:
            current_version = get_version()
            if current_version != pickled_version:
                msg = (
                    "Pickled queryset instance's Django version %s does not "
                    "match the current version %s." % (pickled_version, current_version)
                )
        else:
            msg = "Pickled queryset instance's Django version is not specified."

        if msg:
            warnings.warn(msg, RuntimeWarning, stacklevel=2)

        self.__dict__.update(state)

    def __repr__(self):
        data = list(self[:REPR_OUTPUT_SIZE + 1])
        if len(data) > REPR_OUTPUT_SIZE:
            data[-1] = "...(remaining elements truncated)..."
        return '<%s %r>' % (self.__class__.__name__, data)

    def __len__(self):
        self._fetch_all()
        return len(self._result_cache)

    def __iter__(self):
        """
        The queryset iterator protocol uses three nested iterators in the
        default case:
            1. sql.compiler:execute_sql()
               - Returns 100 rows at time (constants.GET_ITERATOR_CHUNK_SIZE)
                 using cursor.fetchmany(). This part is responsible for
                 doing some column masking, and returning the rows in chunks.
            2. sql.compiler.results_iter()
               - Returns one row at time. At this point the rows are still just
                 tuples. In some cases the return values are converted to
                 Python values at this location.
            3. self.iterator()
               - Responsible for turning the rows into model objects.
        """
        self._fetch_all()
        return iter(self._result_cache)

    def __bool__(self):
        self._fetch_all()
        return bool(self._result_cache)

    def __getitem__(self, k):
        """Retrieve an item or slice from the set of results."""
        if not isinstance(k, (int, slice)):
            raise TypeError
        assert ((not isinstance(k, slice) and (k >= 0)) or
                (isinstance(k, slice) and (k.start is None or k.start >= 0) and
                 (k.stop is None or k.stop >= 0))), \
            "Negative indexing is not supported."

        if self._result_cache is not None:
            return self._result_cache[k]

        if isinstance(k, slice):
            qs = self._chain()
            if k.start is not None:
                start = int(k.start)
            else:
                start = None
            if k.stop is not None:
                stop = int(k.stop)
            else:
                stop = None
            qs.query.set_limits(start, stop)
            return list(qs)[::k.step] if k.step else qs

        qs = self._chain()
        qs.query.set_limits(k, k + 1)
        qs._fetch_all()
        return qs._result_cache[0]

    def __and__(self, other):
        self._merge_sanity_check(other)
        if isinstance(other, EmptyQuerySet):
            return other
        if isinstance(self, EmptyQuerySet):
            return self
        combined = self._chain()
        combined._merge_known_related_objects(other)
        combined.query.combine(other.query, sql.AND)
        return combined

    def __or__(self, other):
        self._merge_sanity_check(other)
        if isinstance(self, EmptyQuerySet):
            return other
        if isinstance(other, EmptyQuerySet):
            return self
        combined = self._chain()
        combined._merge_known_related_objects(other)
        combined.query.combine(other.query, sql.OR)
        return combined

    ####################################
    # METHODS THAT DO DATABASE QUERIES #
    ####################################

    def _iterator(self, use_chunked_fetch, chunk_size):
        yield from self._iterable_class(self, chunked_fetch=use_chunked_fetch, chunk_size=chunk_size)

    def iterator(self, chunk_size=2000):
        """
        An iterator over the results from applying this QuerySet to the
        database.
        """
        if chunk_size <= 0:
            raise ValueError('Chunk size must be strictly positive.')
        use_chunked_fetch = not connections[self.db].settings_dict.get('DISABLE_SERVER_SIDE_CURSORS')
        return self._iterator(use_chunked_fetch, chunk_size)

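    # Usage sketch (not part of Django's source; Entry and process() are
    # hypothetical): iterator() streams results in chunks rather than caching
    # them in _result_cache, keeping memory usage flat on large tables.
    #
    #     for entry in Entry.objects.all().iterator(chunk_size=500):
    #         process(entry)
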
    def aggregate(self, *args, **kwargs):
        """
        Return a dictionary containing the calculations (aggregation)
        over the current queryset.

        If args is present the expression is passed as a kwarg using
        the Aggregate object's default alias.
        """
        if self.query.distinct_fields:
            raise NotImplementedError("aggregate() + distinct(fields) not implemented.")
        self._validate_values_are_expressions(args + tuple(kwargs.values()), method_name='aggregate')
        for arg in args:
            # The default_alias property raises TypeError if default_alias
            # can't be set automatically or AttributeError if it isn't an
            # attribute.
            try:
                arg.default_alias
            except (AttributeError, TypeError):
                raise TypeError("Complex aggregates require an alias")
            kwargs[arg.default_alias] = arg

        query = self.query.chain()
        for (alias, aggregate_expr) in kwargs.items():
            query.add_annotation(aggregate_expr, alias, is_summary=True)
            if not query.annotations[alias].contains_aggregate:
                raise TypeError("%s is not an aggregate expression" % alias)
        return query.get_aggregation(self.db, kwargs)

    def count(self):
        """
        Perform a SELECT COUNT() and return the number of records as an
        integer.

        If the QuerySet is already fully cached, return the length of the
        cached results set to avoid multiple SELECT COUNT(*) calls.
        """
        if self._result_cache is not None:
            return len(self._result_cache)

        return self.query.get_count(using=self.db)

    def get(self, *args, **kwargs):
        """
        Perform the query and return a single object matching the given
        keyword arguments.
        """
        clone = self.filter(*args, **kwargs)
        if self.query.can_filter() and not self.query.distinct_fields:
            clone = clone.order_by()
        num = len(clone)
        if num == 1:
            return clone._result_cache[0]
        if not num:
            raise self.model.DoesNotExist(
                "%s matching query does not exist." %
                self.model._meta.object_name
            )
        raise self.model.MultipleObjectsReturned(
            "get() returned more than one %s -- it returned %s!" %
            (self.model._meta.object_name, num)
        )

    def create(self, **kwargs):
        """
        Create a new object with the given kwargs, saving it to the database
        and returning the created object.
        """
        obj = self.model(**kwargs)
        self._for_write = True
        obj.save(force_insert=True, using=self.db)
        return obj

    def _populate_pk_values(self, objs):
        for obj in objs:
            if obj.pk is None:
                obj.pk = obj._meta.pk.get_pk_value_on_save(obj)

    def bulk_create(self, objs, batch_size=None):
        """
        Insert each of the instances into the database. Do *not* call
        save() on each of the instances, do not send any pre/post_save
        signals, and do not set the primary key attribute if it is an
        autoincrement field (except if features.can_return_ids_from_bulk_insert=True).
        Multi-table models are not supported.
        """
        # When you bulk insert you don't get the primary keys back (if it's an
        # autoincrement, except if can_return_ids_from_bulk_insert=True), so
        # you can't insert into the child tables which references this. There
        # are two workarounds:
        # 1) This could be implemented if you didn't have an autoincrement pk
        # 2) You could do it by doing O(n) normal inserts into the parent
        #    tables to get the primary keys back and then doing a single bulk
        #    insert into the childmost table.
        # We currently set the primary keys on the objects when using
        # PostgreSQL via the RETURNING ID clause. It should be possible for
        # Oracle as well, but the semantics for extracting the primary keys is
        # trickier so it's not done yet.
        assert batch_size is None or batch_size > 0
        # Check that the parents share the same concrete model with our
        # model to detect the inheritance pattern ConcreteGrandParent ->
        # MultiTableParent -> ProxyChild. Simply checking self.model._meta.proxy
        # would not identify that case as involving multiple tables.
        for parent in self.model._meta.get_parent_list():
            if parent._meta.concrete_model is not self.model._meta.concrete_model:
                raise ValueError("Can't bulk create a multi-table inherited model")
        if not objs:
            return objs
        self._for_write = True
        connection = connections[self.db]
        fields = self.model._meta.concrete_fields
        objs = list(objs)
        self._populate_pk_values(objs)
        with transaction.atomic(using=self.db, savepoint=False):
            objs_with_pk, objs_without_pk = partition(lambda o: o.pk is None, objs)
            if objs_with_pk:
                self._batched_insert(objs_with_pk, fields, batch_size)
            if objs_without_pk:
                fields = [f for f in fields if not isinstance(f, AutoField)]
                ids = self._batched_insert(objs_without_pk, fields, batch_size)
                if connection.features.can_return_ids_from_bulk_insert:
                    assert len(ids) == len(objs_without_pk)
                for obj_without_pk, pk in zip(objs_without_pk, ids):
                    obj_without_pk.pk = pk
                    obj_without_pk._state.adding = False
                    obj_without_pk._state.db = self.db

        return objs

    def get_or_create(self, defaults=None, **kwargs):
        """
        Look up an object with the given kwargs, creating one if necessary.
        Return a tuple of (object, created), where created is a boolean
        specifying whether an object was created.
        """
        lookup, params = self._extract_model_params(defaults, **kwargs)
        # The get() needs to be targeted at the write database in order
        # to avoid potential transaction consistency problems.
        self._for_write = True
        try:
            return self.get(**lookup), False
        except self.model.DoesNotExist:
            return self._create_object_from_params(lookup, params)

    def update_or_create(self, defaults=None, **kwargs):
        """
        Look up an object with the given kwargs, updating one with defaults
        if it exists, otherwise create a new one.
        Return a tuple (object, created), where created is a boolean
        specifying whether an object was created.
        """
        defaults = defaults or {}
        lookup, params = self._extract_model_params(defaults, **kwargs)
        self._for_write = True
        with transaction.atomic(using=self.db):
            try:
                obj = self.select_for_update().get(**lookup)
            except self.model.DoesNotExist:
                obj, created = self._create_object_from_params(lookup, params)
                if created:
                    return obj, created
            for k, v in defaults.items():
                setattr(obj, k, v() if callable(v) else v)
            obj.save(using=self.db)
        return obj, False

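    # Usage sketch (not part of Django's source; Author is a hypothetical
    # model):
    #
    #     obj, created = Author.objects.get_or_create(
    #         name='Alice', defaults={'age': 30})
    #     obj, created = Author.objects.update_or_create(
    #         name='Alice', defaults={'age': 31})
    #
    # Both target the write database; update_or_create additionally locks the
    # row with select_for_update() inside a transaction before updating it.
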
    def _create_object_from_params(self, lookup, params):
        """
        Try to create an object using passed params. Used by get_or_create()
        and update_or_create().
        """
        try:
            with transaction.atomic(using=self.db):
                params = {k: v() if callable(v) else v for k, v in params.items()}
                obj = self.create(**params)
            return obj, True
        except IntegrityError as e:
            try:
                return self.get(**lookup), False
            except self.model.DoesNotExist:
                pass
            raise e

    def _extract_model_params(self, defaults, **kwargs):
        """
        Prepare `lookup` (kwargs that are valid model attributes), `params`
        (for creating a model instance) based on given kwargs; for use by
        get_or_create() and update_or_create().
        """
        defaults = defaults or {}
        lookup = kwargs.copy()
        for f in self.model._meta.fields:
            if f.attname in lookup:
                lookup[f.name] = lookup.pop(f.attname)
        params = {k: v for k, v in kwargs.items() if LOOKUP_SEP not in k}
        params.update(defaults)
        property_names = self.model._meta._property_names
        invalid_params = []
        for param in params:
            try:
                self.model._meta.get_field(param)
            except exceptions.FieldDoesNotExist:
                # It's okay to use a model's property if it has a setter.
                if not (param in property_names and getattr(self.model, param).fset):
                    invalid_params.append(param)
        if invalid_params:
            raise exceptions.FieldError(
                "Invalid field name(s) for model %s: '%s'." % (
                    self.model._meta.object_name,
                    "', '".join(sorted(invalid_params)),
                ))
        return lookup, params

    def _earliest_or_latest(self, *fields, field_name=None):
        """
        Return the latest object, according to the model's
        'get_latest_by' option or optional given field_name.
        """
        if fields and field_name is not None:
            raise ValueError('Cannot use both positional arguments and the field_name keyword argument.')

        order_by = None
        if field_name is not None:
            warnings.warn(
                'The field_name keyword argument to earliest() and latest() '
                'is deprecated in favor of passing positional arguments.',
                RemovedInDjango30Warning,
            )
            order_by = (field_name,)
        elif fields:
            order_by = fields
        else:
            order_by = getattr(self.model._meta, 'get_latest_by')
            if order_by and not isinstance(order_by, (tuple, list)):
                order_by = (order_by,)
        if order_by is None:
            raise ValueError(
                "earliest() and latest() require either fields as positional "
                "arguments or 'get_latest_by' in the model's Meta."
            )

        assert self.query.can_filter(), \
            "Cannot change a query once a slice has been taken."
        obj = self._chain()
        obj.query.set_limits(high=1)
        obj.query.clear_ordering(force_empty=True)
        obj.query.add_ordering(*order_by)
        return obj.get()

    def earliest(self, *fields, field_name=None):
        return self._earliest_or_latest(*fields, field_name=field_name)

    def latest(self, *fields, field_name=None):
        return self.reverse()._earliest_or_latest(*fields, field_name=field_name)

    def first(self):
        """Return the first object of a query or None if no match is found."""
        for obj in (self if self.ordered else self.order_by('pk'))[:1]:
            return obj

    def last(self):
        """Return the last object of a query or None if no match is found."""
        for obj in (self.reverse() if self.ordered else self.order_by('-pk'))[:1]:
            return obj

    def in_bulk(self, id_list=None, *, field_name='pk'):
        """
        Return a dictionary mapping each of the given IDs to the object with
        that ID. If `id_list` isn't provided, evaluate the entire QuerySet.
        """
        assert self.query.can_filter(), \
            "Cannot use 'limit' or 'offset' with in_bulk"
        if field_name != 'pk' and not self.model._meta.get_field(field_name).unique:
            raise ValueError("in_bulk()'s field_name must be a unique field but %r isn't." % field_name)
        if id_list is not None:
            if not id_list:
                return {}
            filter_key = '{}__in'.format(field_name)
            batch_size = connections[self.db].features.max_query_params
            id_list = tuple(id_list)
            # If the database has a limit on the number of query parameters
            # (e.g. SQLite), retrieve objects in batches if necessary.
            if batch_size and batch_size < len(id_list):
                qs = ()
                for offset in range(0, len(id_list), batch_size):
                    batch = id_list[offset:offset + batch_size]
                    qs += tuple(self.filter(**{filter_key: batch}).order_by())
            else:
                qs = self.filter(**{filter_key: id_list}).order_by()
        else:
            qs = self._chain()
        return {getattr(obj, field_name): obj for obj in qs}

    def delete(self):
        """Delete the records in the current QuerySet."""
        assert self.query.can_filter(), \
            "Cannot use 'limit' or 'offset' with delete."

        if self._fields is not None:
            raise TypeError("Cannot call delete() after .values() or .values_list()")

        del_query = self._chain()

        # The delete is actually 2 queries - one to find related objects,
        # and one to delete. Make sure that the discovery of related
        # objects is performed on the same database as the deletion.
        del_query._for_write = True

        # Disable non-supported fields.
        del_query.query.select_for_update = False
        del_query.query.select_related = False
        del_query.query.clear_ordering(force_empty=True)

        collector = Collector(using=del_query.db)
        collector.collect(del_query)
        deleted, _rows_count = collector.delete()

        # Clear the result cache, in case this QuerySet gets reused.
        self._result_cache = None
        return deleted, _rows_count

    delete.alters_data = True
    delete.queryset_only = True

    def _raw_delete(self, using):
        """
        Delete objects found from the given queryset in single direct SQL
        query. No signals are sent and there is no protection for cascades.
        """
        return sql.DeleteQuery(self.model).delete_qs(self, using)
    _raw_delete.alters_data = True

    def update(self, **kwargs):
        """
        Update all elements in the current QuerySet, setting all the given
        fields to the appropriate values.
        """
        assert self.query.can_filter(), \
            "Cannot update a query once a slice has been taken."
        self._for_write = True
        query = self.query.chain(sql.UpdateQuery)
        query.add_update_values(kwargs)
        # Clear any annotations so that they won't be present in subqueries.
        query._annotations = None
        with transaction.atomic(using=self.db, savepoint=False):
            rows = query.get_compiler(self.db).execute_sql(CURSOR)
        self._result_cache = None
        return rows
    update.alters_data = True

    def _update(self, values):
        """
        A version of update() that accepts field objects instead of field names.
        Used primarily for model saving and not intended for use by general
        code (it requires too much poking around at model internals to be
        useful at that level).
        """
        assert self.query.can_filter(), \
            "Cannot update a query once a slice has been taken."
        query = self.query.chain(sql.UpdateQuery)
        query.add_update_fields(values)
        self._result_cache = None
        return query.get_compiler(self.db).execute_sql(CURSOR)
    _update.alters_data = True
    _update.queryset_only = False

    def exists(self):
        if self._result_cache is None:
            return self.query.has_results(using=self.db)
        return bool(self._result_cache)

    def _prefetch_related_objects(self):
        # This method can only be called once the result cache has been filled.
        prefetch_related_objects(self._result_cache, *self._prefetch_related_lookups)
        self._prefetch_done = True

    ##################################################
    # PUBLIC METHODS THAT RETURN A QUERYSET SUBCLASS #
    ##################################################

    def raw(self, raw_query, params=None, translations=None, using=None):
        if using is None:
            using = self.db
        return RawQuerySet(raw_query, model=self.model, params=params, translations=translations, using=using)

    def _values(self, *fields, **expressions):
        clone = self._chain()
        if expressions:
            clone = clone.annotate(**expressions)
        clone._fields = fields
        clone.query.set_values(fields)
        return clone

    def values(self, *fields, **expressions):
        fields += tuple(expressions)
        clone = self._values(*fields, **expressions)
        clone._iterable_class = ValuesIterable
        return clone

    def values_list(self, *fields, flat=False, named=False):
        if flat and named:
            raise TypeError("'flat' and 'named' can't be used together.")
        if flat and len(fields) > 1:
            raise TypeError("'flat' is not valid when values_list is called with more than one field.")

        field_names = {f for f in fields if not hasattr(f, 'resolve_expression')}
        _fields = []
        expressions = {}
        counter = 1
        for field in fields:
            if hasattr(field, 'resolve_expression'):
                field_id_prefix = getattr(field, 'default_alias', field.__class__.__name__.lower())
                while True:
                    field_id = field_id_prefix + str(counter)
                    counter += 1
                    if field_id not in field_names:
                        break
                expressions[field_id] = field
                _fields.append(field_id)
            else:
                _fields.append(field)

        clone = self._values(*_fields, **expressions)
        clone._iterable_class = (
            NamedValuesListIterable if named
            else FlatValuesListIterable if flat
            else ValuesListIterable
        )
        return clone

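    # Usage sketch (not part of Django's source; Entry is a hypothetical
    # model). Each variant just swaps _iterable_class on the clone:
    #
    #     Entry.objects.values('id', 'headline')      # iterates dicts
    #     Entry.objects.values_list('id', flat=True)  # iterates bare values
    #     Entry.objects.values_list('id', 'headline', named=True)  # namedtuples
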
    def dates(self, field_name, kind, order='ASC'):
        """
        Return a list of date objects representing all available dates for
        the given field_name, scoped to 'kind'.
        """
        assert kind in ("year", "month", "day"), \
            "'kind' must be one of 'year', 'month' or 'day'."
        assert order in ('ASC', 'DESC'), \
            "'order' must be either 'ASC' or 'DESC'."
        return self.annotate(
            datefield=Trunc(field_name, kind, output_field=DateField()),
            plain_field=F(field_name)
        ).values_list(
            'datefield', flat=True
        ).distinct().filter(plain_field__isnull=False).order_by(('-' if order == 'DESC' else '') + 'datefield')

    def datetimes(self, field_name, kind, order='ASC', tzinfo=None):
        """
        Return a list of datetime objects representing all available
        datetimes for the given field_name, scoped to 'kind'.
        """
        assert kind in ("year", "month", "day", "hour", "minute", "second"), \
            "'kind' must be one of 'year', 'month', 'day', 'hour', 'minute' or 'second'."
        assert order in ('ASC', 'DESC'), \
            "'order' must be either 'ASC' or 'DESC'."
        if settings.USE_TZ:
            if tzinfo is None:
                tzinfo = timezone.get_current_timezone()
        else:
            tzinfo = None
        return self.annotate(
            datetimefield=Trunc(field_name, kind, output_field=DateTimeField(), tzinfo=tzinfo),
            plain_field=F(field_name)
        ).values_list(
            'datetimefield', flat=True
        ).distinct().filter(plain_field__isnull=False).order_by(('-' if order == 'DESC' else '') + 'datetimefield')

    def none(self):
        """Return an empty QuerySet."""
        clone = self._chain()
        clone.query.set_empty()
        return clone

    ##################################################################
    # PUBLIC METHODS THAT ALTER ATTRIBUTES AND RETURN A NEW QUERYSET #
    ##################################################################

    def all(self):
        """
        Return a new QuerySet that is a copy of the current one. This allows a
        QuerySet to proxy for a model manager in some cases.
        """
        return self._chain()

    def filter(self, *args, **kwargs):
        """
        Return a new QuerySet instance with the args ANDed to the existing
        set.
        """
        return self._filter_or_exclude(False, *args, **kwargs)

    def exclude(self, *args, **kwargs):
        """
        Return a new QuerySet instance with NOT (args) ANDed to the existing
        set.
        """
        return self._filter_or_exclude(True, *args, **kwargs)

    def _filter_or_exclude(self, negate, *args, **kwargs):
        if args or kwargs:
            assert self.query.can_filter(), \
                "Cannot filter a query once a slice has been taken."

        clone = self._chain()
        if negate:
            clone.query.add_q(~Q(*args, **kwargs))
        else:
            clone.query.add_q(Q(*args, **kwargs))
        return clone

    def complex_filter(self, filter_obj):
        """
        Return a new QuerySet instance with filter_obj added to the filters.

        filter_obj can be a Q object or a dictionary of keyword lookup
        arguments.

        This exists to support framework features such as 'limit_choices_to',
        and usually it will be more natural to use other methods.
        """
        if isinstance(filter_obj, Q):
            clone = self._chain()
            clone.query.add_q(filter_obj)
            return clone
        else:
            return self._filter_or_exclude(None, **filter_obj)

    def _combinator_query(self, combinator, *other_qs, all=False):
        # Clone the query to inherit the select list and everything
        clone = self._chain()
        # Clear limits and ordering so they can be reapplied
        clone.query.clear_ordering(True)
        clone.query.clear_limits()
        clone.query.combined_queries = (self.query,) + tuple(qs.query for qs in other_qs)
        clone.query.combinator = combinator
        clone.query.combinator_all = all
        return clone

    def union(self, *other_qs, all=False):
        # If the query is an EmptyQuerySet, combine all nonempty querysets.
        if isinstance(self, EmptyQuerySet):
            qs = [q for q in other_qs if not isinstance(q, EmptyQuerySet)]
            return qs[0]._combinator_query('union', *qs[1:], all=all) if qs else self
        return self._combinator_query('union', *other_qs, all=all)

    def intersection(self, *other_qs):
        # If any query is an EmptyQuerySet, return it.
        if isinstance(self, EmptyQuerySet):
            return self
        for other in other_qs:
            if isinstance(other, EmptyQuerySet):
                return other
        return self._combinator_query('intersection', *other_qs)

    def difference(self, *other_qs):
        # If the query is an EmptyQuerySet, return it.
        if isinstance(self, EmptyQuerySet):
            return self
        return self._combinator_query('difference', *other_qs)

    def select_for_update(self, nowait=False, skip_locked=False, of=()):
        """
        Return a new QuerySet instance that will select objects with a
        FOR UPDATE lock.
        """
        if nowait and skip_locked:
            raise ValueError('The nowait option cannot be used with skip_locked.')
        obj = self._chain()
        obj._for_write = True
        obj.query.select_for_update = True
        obj.query.select_for_update_nowait = nowait
        obj.query.select_for_update_skip_locked = skip_locked
        obj.query.select_for_update_of = of
        return obj

    def select_related(self, *fields):
        """
        Return a new QuerySet instance that will select related objects.

        If fields are specified, they must be ForeignKey fields and only those
        related objects are included in the selection.

        If select_related(None) is called, clear the list.
        """

        if self._fields is not None:
            raise TypeError("Cannot call select_related() after .values() or .values_list()")

        obj = self._chain()
        if fields == (None,):
            obj.query.select_related = False
        elif fields:
            obj.query.add_select_related(fields)
        else:
            obj.query.select_related = True
        return obj

    def prefetch_related(self, *lookups):
        """
        Return a new QuerySet instance that will prefetch the specified
        Many-To-One and Many-To-Many related objects when the QuerySet is
        evaluated.

        When prefetch_related() is called more than once, append to the list of
        prefetch lookups. If prefetch_related(None) is called, clear the list.
        """
        clone = self._chain()
        if lookups == (None,):
            clone._prefetch_related_lookups = ()
        else:
            for lookup in lookups:
                if isinstance(lookup, Prefetch):
                    lookup = lookup.prefetch_to
                lookup = lookup.split(LOOKUP_SEP, 1)[0]
                if lookup in self.query._filtered_relations:
                    raise ValueError('prefetch_related() is not supported with FilteredRelation.')
            clone._prefetch_related_lookups = clone._prefetch_related_lookups + lookups
        return clone

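    # Usage sketch (not part of Django's source; Book and Publisher are
    # hypothetical models): select_related() follows foreign keys within the
    # same SQL query (a JOIN), while prefetch_related() issues one extra query
    # per lookup and joins the objects in Python.
    #
    #     Book.objects.select_related('publisher')
    #     Publisher.objects.prefetch_related('book_set')
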
 785     def annotate(self, *args, **kwargs):
 786         """
 787         Return a query set in which the returned objects have been annotated
 788         with extra data or aggregations.
 789         """
 790         self._validate_values_are_expressions(args + tuple(kwargs.values()), method_name=‘annotate‘)
 791         annotations = OrderedDict()  # To preserve ordering of args
 792         for arg in args:
 793             # The default_alias property may raise a TypeError.
 794             try:
 795                 if arg.default_alias in kwargs:
 796                     raise ValueError("The named annotation ‘%s‘ conflicts with the "
 797                                      "default name for another annotation."
 798                                      % arg.default_alias)
 799             except TypeError:
 800                 raise TypeError("Complex annotations require an alias")
 801             annotations[arg.default_alias] = arg
 802         annotations.update(kwargs)
 803
 804         clone = self._chain()
 805         names = self._fields
 806         if names is None:
 807             names = {f.name for f in self.model._meta.get_fields()}
 808
 809         for alias, annotation in annotations.items():
 810             if alias in names:
 811                 raise ValueError("The annotation ‘%s‘ conflicts with a field on "
 812                                  "the model." % alias)
 813             if isinstance(annotation, FilteredRelation):
 814                 clone.query.add_filtered_relation(annotation, alias)
 815             else:
 816                 clone.query.add_annotation(annotation, alias, is_summary=False)
 817
 818         for alias, annotation in clone.query.annotations.items():
 819             if alias in annotations and annotation.contains_aggregate:
 820                 if clone._fields is None:
 821                     clone.query.group_by = True
 822                 else:
 823                     clone.query.set_group_by()
 824                 break
 825
 826         return clone
 827
 828     def order_by(self, *field_names):
 829         """Return a new QuerySet instance with the ordering changed."""
 830         assert self.query.can_filter(),  831             "Cannot reorder a query once a slice has been taken."
 832         obj = self._chain()
 833         obj.query.clear_ordering(force_empty=False)
 834         obj.query.add_ordering(*field_names)
 835         return obj
 836
 837     def distinct(self, *field_names):
 838         """
 839         Return a new QuerySet instance that will select only distinct results.
 840         """
 841         assert self.query.can_filter(),  842             "Cannot create distinct fields once a slice has been taken."
 843         obj = self._chain()
 844         obj.query.add_distinct_fields(*field_names)
 845         return obj
 846
 847     def extra(self, select=None, where=None, params=None, tables=None,
 848               order_by=None, select_params=None):
 849         """Add extra SQL fragments to the query."""
 850         assert self.query.can_filter(),  851             "Cannot change a query once a slice has been taken"
 852         clone = self._chain()
 853         clone.query.add_extra(select, select_params, where, params, tables, order_by)
 854         return clone
 855
 856     def reverse(self):
 857         """Reverse the ordering of the QuerySet."""
 858         if not self.query.can_filter():
 859             raise TypeError(‘Cannot reverse a query once a slice has been taken.‘)
 860         clone = self._chain()
 861         clone.query.standard_ordering = not clone.query.standard_ordering
 862         return clone
 863
 864     def defer(self, *fields):
 865         """
 866         Defer the loading of data for certain fields until they are accessed.
 867         Add the set of deferred fields to any existing set of deferred fields.
 868         The only exception to this is if None is passed in as the only
 869         parameter, in which case removal all deferrals.
 870         """
 871         if self._fields is not None:
 872             raise TypeError("Cannot call defer() after .values() or .values_list()")
 873         clone = self._chain()
 874         if fields == (None,):
 875             clone.query.clear_deferred_loading()
 876         else:
 877             clone.query.add_deferred_loading(fields)
 878         return clone
 879
 880     def only(self, *fields):
 881         """
 882         Essentially, the opposite of defer(). Only the fields passed into this
 883         method and that are not already specified as deferred are loaded
 884         immediately when the queryset is evaluated.
 885         """
 886         if self._fields is not None:
 887             raise TypeError("Cannot call only() after .values() or .values_list()")
 888         if fields == (None,):
 889             # Can only pass None to defer(), not only(), as the rest option.
 890             # That won‘t stop people trying to do this, so let‘s be explicit.
 891             raise TypeError("Cannot pass None as an argument to only().")
 892         for field in fields:
 893             field = field.split(LOOKUP_SEP, 1)[0]
 894             if field in self.query._filtered_relations:
 895                 raise ValueError(‘only() is not supported with FilteredRelation.‘)
 896         clone = self._chain()
 897         clone.query.add_immediate_loading(fields)
 898         return clone
 899
 900     def using(self, alias):
 901         """Select which database this QuerySet should execute against."""
 902         clone = self._chain()
 903         clone._db = alias
 904         return clone
 905
 906     ###################################
 907     # PUBLIC INTROSPECTION ATTRIBUTES #
 908     ###################################
 909
 910     @property
 911     def ordered(self):
 912         """
 913         Return True if the QuerySet is ordered -- i.e. has an order_by()
 914         clause or a default ordering on the model.
 915         """
 916         if self.query.extra_order_by or self.query.order_by:
 917             return True
 918         elif self.query.default_ordering and self.query.get_meta().ordering:
 919             return True
 920         else:
 921             return False
 922
 923     @property
 924     def db(self):
 925         """Return the database used if this query is executed now."""
 926         if self._for_write:
 927             return self._db or router.db_for_write(self.model, **self._hints)
 928         return self._db or router.db_for_read(self.model, **self._hints)
 929
 930     ###################
 931     # PRIVATE METHODS #
 932     ###################
 933
 934     def _insert(self, objs, fields, return_id=False, raw=False, using=None):
 935         """
 936         Insert a new record for the given model. This provides an interface to
 937         the InsertQuery class and is how Model.save() is implemented.
 938         """
 939         self._for_write = True
 940         if using is None:
 941             using = self.db
 942         query = sql.InsertQuery(self.model)
 943         query.insert_values(fields, objs, raw=raw)
 944         return query.get_compiler(using=using).execute_sql(return_id)
 945     _insert.alters_data = True
 946     _insert.queryset_only = False
 947
 948     def _batched_insert(self, objs, fields, batch_size):
 949         """
 950         A helper method for bulk_create() to insert the bulk one batch at a
 951         time. Insert recursively a batch from the front of the bulk and then
 952         _batched_insert() the remaining objects again.
 953         """
 954         if not objs:
 955             return
 956         ops = connections[self.db].ops
 957         batch_size = (batch_size or max(ops.bulk_batch_size(fields, objs), 1))
 958         inserted_ids = []
 959         for item in [objs[i:i + batch_size] for i in range(0, len(objs), batch_size)]:
 960             if connections[self.db].features.can_return_ids_from_bulk_insert:
 961                 inserted_id = self._insert(item, fields=fields, using=self.db, return_id=True)
 962                 if isinstance(inserted_id, list):
 963                     inserted_ids.extend(inserted_id)
 964                 else:
 965                     inserted_ids.append(inserted_id)
 966             else:
 967                 self._insert(item, fields=fields, using=self.db)
 968         return inserted_ids
 969
 970     def _chain(self, **kwargs):
 971         """
 972         Return a copy of the current QuerySet that‘s ready for another
 973         operation.
 974         """
 975         obj = self._clone()
 976         if obj._sticky_filter:
 977             obj.query.filter_is_sticky = True
 978             obj._sticky_filter = False
 979         obj.__dict__.update(kwargs)
 980         return obj
 981
 982     def _clone(self):
 983         """
 984         Return a copy of the current QuerySet. A lightweight alternative
 985         to deepcopy().
 986         """
 987         c = self.__class__(model=self.model, query=self.query.chain(), using=self._db, hints=self._hints)
 988         c._sticky_filter = self._sticky_filter
 989         c._for_write = self._for_write
 990         c._prefetch_related_lookups = self._prefetch_related_lookups[:]
 991         c._known_related_objects = self._known_related_objects
 992         c._iterable_class = self._iterable_class
 993         c._fields = self._fields
 994         return c
 995
 996     def _fetch_all(self):
 997         if self._result_cache is None:
 998             self._result_cache = list(self._iterable_class(self))
 999         if self._prefetch_related_lookups and not self._prefetch_done:
1000             self._prefetch_related_objects()
1001
1002     def _next_is_sticky(self):
1003         """
1004         Indicate that the next filter call and the one following that should
1005         be treated as a single filter. This is only important when it comes to
1006         determining when to reuse tables for many-to-many filters. Required so
1007         that we can filter naturally on the results of related managers.
1008
1009         This doesn‘t return a clone of the current QuerySet (it returns
1010         "self"). The method is only used internally and should be immediately
1011         followed by a filter() that does create a clone.
1012         """
1013         self._sticky_filter = True
1014         return self
1015
1016     def _merge_sanity_check(self, other):
1017         """Check that two QuerySet classes may be merged."""
1018         if self._fields is not None and (
1019                 set(self.query.values_select) != set(other.query.values_select) or
1020                 set(self.query.extra_select) != set(other.query.extra_select) or
1021                 set(self.query.annotation_select) != set(other.query.annotation_select)):
1022             raise TypeError(
1023                 "Merging ‘%s‘ classes must involve the same values in each case."
1024                 % self.__class__.__name__
1025             )
1026
1027     def _merge_known_related_objects(self, other):
1028         """
1029         Keep track of all known related objects from either QuerySet instance.
1030         """
1031         for field, objects in other._known_related_objects.items():
1032             self._known_related_objects.setdefault(field, {}).update(objects)
1033
1034     def resolve_expression(self, *args, **kwargs):
1035         if self._fields and len(self._fields) > 1:
1036             # values() queryset can only be used as nested queries
1037             # if they are set up to select only a single field.
1038             raise TypeError(‘Cannot use multi-field values as a filter value.‘)
1039         query = self.query.resolve_expression(*args, **kwargs)
1040         query._db = self._db
1041         return query
1042     resolve_expression.queryset_only = True
1043
1044     def _add_hints(self, **hints):
1045         """
1046         Update hinting information for use by routers. Add new key/values or
1047         overwrite existing key/values.
1048         """
1049         self._hints.update(hints)
1050
1051     def _has_filters(self):
1052         """
1053         Check if this QuerySet has any filtering going on. This isn‘t
1054         equivalent with checking if all objects are present in results, for
1055         example, qs[1:]._has_filters() -> False.
1056         """
1057         return self.query.has_filters()
1058
1059     @staticmethod
1060     def _validate_values_are_expressions(values, method_name):
1061         invalid_args = sorted(str(arg) for arg in values if not hasattr(arg, ‘resolve_expression‘))
1062         if invalid_args:
1063             raise TypeError(
1064                 ‘QuerySet.%s() received non-expression(s): %s.‘ % (
1065                     method_name,
1066                     ‘, ‘.join(invalid_args),
1067                 )
1068             )
1069
1070
1071 class InstanceCheckMeta(type):
1072     def __instancecheck__(self, instance):
1073         return isinstance(instance, QuerySet) and instance.query.is_empty()
1074
1075
1076 class EmptyQuerySet(metaclass=InstanceCheckMeta):
1077     """
1078     Marker class to checking if a queryset is empty by .none():
1079         isinstance(qs.none(), EmptyQuerySet) -> True
1080     """
1081
1082     def __init__(self, *args, **kwargs):
1083         raise TypeError("EmptyQuerySet can‘t be instantiated")
1084
1085
class RawQuerySet:
    """
    Provide an iterator which converts the results of raw SQL queries into
    annotated model instances.
    """
    def __init__(self, raw_query, model=None, query=None, params=None,
                 translations=None, using=None, hints=None):
        self.raw_query = raw_query
        self.model = model
        self._db = using
        self._hints = hints or {}
        self.query = query or sql.RawQuery(sql=raw_query, using=self.db, params=params)
        self.params = params or ()
        self.translations = translations or {}

    def resolve_model_init_order(self):
        """Resolve the init field names and value positions."""
        converter = connections[self.db].introspection.column_name_converter
        model_init_fields = [f for f in self.model._meta.fields if converter(f.column) in self.columns]
        annotation_fields = [(column, pos) for pos, column in enumerate(self.columns)
                             if column not in self.model_fields]
        model_init_order = [self.columns.index(converter(f.column)) for f in model_init_fields]
        model_init_names = [f.attname for f in model_init_fields]
        return model_init_names, model_init_order, annotation_fields

    def __iter__(self):
        # Cache some things for performance reasons outside the loop.
        db = self.db
        compiler = connections[db].ops.compiler('SQLCompiler')(
            self.query, connections[db], db
        )

        query = iter(self.query)

        try:
            model_init_names, model_init_pos, annotation_fields = self.resolve_model_init_order()
            if self.model._meta.pk.attname not in model_init_names:
                raise InvalidQuery('Raw query must include the primary key')
            model_cls = self.model
            fields = [self.model_fields.get(c) for c in self.columns]
            converters = compiler.get_converters([
                f.get_col(f.model._meta.db_table) if f else None for f in fields
            ])
            if converters:
                query = compiler.apply_converters(query, converters)
            for values in query:
                # Associate fields to values
                model_init_values = [values[pos] for pos in model_init_pos]
                instance = model_cls.from_db(db, model_init_names, model_init_values)
                if annotation_fields:
                    for column, pos in annotation_fields:
                        setattr(instance, column, values[pos])
                yield instance
        finally:
            # Done iterating the Query. If it has its own cursor, close it.
            if hasattr(self.query, 'cursor') and self.query.cursor:
                self.query.cursor.close()

    def __repr__(self):
        return "<%s: %s>" % (self.__class__.__name__, self.query)

    def __getitem__(self, k):
        return list(self)[k]

    @property
    def db(self):
        """Return the database used if this query is executed now."""
        return self._db or router.db_for_read(self.model, **self._hints)

    def using(self, alias):
        """Select the database this RawQuerySet should execute against."""
        return RawQuerySet(
            self.raw_query, model=self.model,
            query=self.query.chain(using=alias),
            params=self.params, translations=self.translations,
            using=alias,
        )

    @cached_property
    def columns(self):
        """
        A list of model field names in the order they'll appear in the
        query results.
        """
        columns = self.query.get_columns()
        # Adjust any column names which don't match field names
        for (query_name, model_name) in self.translations.items():
            # Ignore translations for nonexistent column names
            try:
                index = columns.index(query_name)
            except ValueError:
                pass
            else:
                columns[index] = model_name
        return columns

    @cached_property
    def model_fields(self):
        """A dict mapping column names to model field names."""
        converter = connections[self.db].introspection.table_name_converter
        model_fields = {}
        for field in self.model._meta.fields:
            name, column = field.get_attname_column()
            model_fields[converter(column)] = field
        return model_fields

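In practice you obtain a RawQuerySet from Manager.raw() rather than constructing one directly. A minimal sketch, assuming the Blog model from later in this post and its default table name queryset_demo_blog; note the raw SQL must select the primary key, or __iter__() raises InvalidQuery:

>>> blogs = Blog.objects.raw('SELECT id, name, tagline FROM queryset_demo_blog WHERE name = %s', ['Tom'])
>>> for b in blogs:          # the SQL executes on iteration
...     print(b.name)
Tom
>>> # translations maps result columns whose names don't match field names:
>>> Blog.objects.raw('SELECT id, name AS title, tagline FROM queryset_demo_blog', translations={'title': 'name'})
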
class Prefetch:
    def __init__(self, lookup, queryset=None, to_attr=None):
        # `prefetch_through` is the path we traverse to perform the prefetch.
        self.prefetch_through = lookup
        # `prefetch_to` is the path to the attribute that stores the result.
        self.prefetch_to = lookup
        if queryset is not None and not issubclass(queryset._iterable_class, ModelIterable):
            raise ValueError('Prefetch querysets cannot use values().')
        if to_attr:
            self.prefetch_to = LOOKUP_SEP.join(lookup.split(LOOKUP_SEP)[:-1] + [to_attr])

        self.queryset = queryset
        self.to_attr = to_attr

    def __getstate__(self):
        obj_dict = self.__dict__.copy()
        if self.queryset is not None:
            # Prevent the QuerySet from being evaluated
            obj_dict['queryset'] = self.queryset._chain(
                _result_cache=[],
                _prefetch_done=True,
            )
        return obj_dict

    def add_prefix(self, prefix):
        self.prefetch_through = prefix + LOOKUP_SEP + self.prefetch_through
        self.prefetch_to = prefix + LOOKUP_SEP + self.prefetch_to

    def get_current_prefetch_to(self, level):
        return LOOKUP_SEP.join(self.prefetch_to.split(LOOKUP_SEP)[:level + 1])

    def get_current_to_attr(self, level):
        parts = self.prefetch_to.split(LOOKUP_SEP)
        to_attr = parts[level]
        as_attr = self.to_attr and level == len(parts) - 1
        return to_attr, as_attr

    def get_current_queryset(self, level):
        if self.get_current_prefetch_to(level) == self.prefetch_to:
            return self.queryset
        return None

    def __eq__(self, other):
        if isinstance(other, Prefetch):
            return self.prefetch_to == other.prefetch_to
        return False

    def __hash__(self):
        return hash(self.__class__) ^ hash(self.prefetch_to)

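In user code a Prefetch object is handed to prefetch_related() to customize the lookup's queryset or to store the results on a separate attribute. A sketch assuming a hypothetical Entry model with a ForeignKey to Blog and a pub_date field:

>>> from django.db.models import Prefetch
>>> blogs = Blog.objects.prefetch_related(
...     Prefetch('entry_set',
...              queryset=Entry.objects.order_by('-pub_date'),
...              to_attr='sorted_entries'),
... )
>>> blogs[0].sorted_entries    # a plain list, not a related manager

Note the guard in __init__: passing a queryset that has been through values() raises ValueError, because prefetching needs model instances to attach caches to.
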
def normalize_prefetch_lookups(lookups, prefix=None):
    """Normalize lookups into Prefetch objects."""
    ret = []
    for lookup in lookups:
        if not isinstance(lookup, Prefetch):
            lookup = Prefetch(lookup)
        if prefix:
            lookup.add_prefix(prefix)
        ret.append(lookup)
    return ret

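A quick illustration of this helper (it lives in django.db.models.query and is not public API, so this is only for understanding the code below):

>>> from django.db.models.query import normalize_prefetch_lookups
>>> lookups = normalize_prefetch_lookups(['entry_set'], prefix='blog')
>>> lookups[0].prefetch_through, lookups[0].prefetch_to
('blog__entry_set', 'blog__entry_set')
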
def prefetch_related_objects(model_instances, *related_lookups):
    """
    Populate prefetched object caches for a list of model instances based on
    the lookups/Prefetch instances given.
    """
    if len(model_instances) == 0:
        return  # nothing to do

    # We need to be able to dynamically add to the list of prefetch_related
    # lookups that we look up (see below).  So we need some book keeping to
    # ensure we don't do duplicate work.
    done_queries = {}    # dictionary of things like 'foo__bar': [results]

    auto_lookups = set()  # we add to this as we go through.
    followed_descriptors = set()  # recursion protection

    all_lookups = normalize_prefetch_lookups(reversed(related_lookups))
    while all_lookups:
        lookup = all_lookups.pop()
        if lookup.prefetch_to in done_queries:
            if lookup.queryset:
                raise ValueError("'%s' lookup was already seen with a different queryset. "
                                 "You may need to adjust the ordering of your lookups." % lookup.prefetch_to)

            continue

        # Top level, the list of objects to decorate is the result cache
        # from the primary QuerySet. It won't be for deeper levels.
        obj_list = model_instances

        through_attrs = lookup.prefetch_through.split(LOOKUP_SEP)
        for level, through_attr in enumerate(through_attrs):
            # Prepare main instances
            if len(obj_list) == 0:
                break

            prefetch_to = lookup.get_current_prefetch_to(level)
            if prefetch_to in done_queries:
                # Skip any prefetching, and any object preparation
                obj_list = done_queries[prefetch_to]
                continue

            # Prepare objects:
            good_objects = True
            for obj in obj_list:
                # Since prefetching can re-use instances, it is possible to have
                # the same instance multiple times in obj_list, so obj might
                # already be prepared.
                if not hasattr(obj, '_prefetched_objects_cache'):
                    try:
                        obj._prefetched_objects_cache = {}
                    except (AttributeError, TypeError):
                        # Must be an immutable object from
                        # values_list(flat=True), for example (TypeError) or
                        # a QuerySet subclass that isn't returning Model
                        # instances (AttributeError), either in Django or a 3rd
                        # party. prefetch_related() doesn't make sense, so quit.
                        good_objects = False
                        break
            if not good_objects:
                break

            # Descend down tree

            # We assume that objects retrieved are homogeneous (which is the premise
            # of prefetch_related), so what applies to first object applies to all.
            first_obj = obj_list[0]
            to_attr = lookup.get_current_to_attr(level)[0]
            prefetcher, descriptor, attr_found, is_fetched = get_prefetcher(first_obj, through_attr, to_attr)

            if not attr_found:
                raise AttributeError("Cannot find '%s' on %s object, '%s' is an invalid "
                                     "parameter to prefetch_related()" %
                                     (through_attr, first_obj.__class__.__name__, lookup.prefetch_through))

            if level == len(through_attrs) - 1 and prefetcher is None:
                # Last one, this *must* resolve to something that supports
                # prefetching, otherwise there is no point adding it and the
                # developer asking for it has made a mistake.
                raise ValueError("'%s' does not resolve to an item that supports "
                                 "prefetching - this is an invalid parameter to "
                                 "prefetch_related()." % lookup.prefetch_through)

            if prefetcher is not None and not is_fetched:
                obj_list, additional_lookups = prefetch_one_level(obj_list, prefetcher, lookup, level)
                # We need to ensure we don't keep adding lookups from the
                # same relationships to stop infinite recursion. So, if we
                # are already on an automatically added lookup, don't add
                # the new lookups from relationships we've seen already.
                if not (lookup in auto_lookups and descriptor in followed_descriptors):
                    done_queries[prefetch_to] = obj_list
                    new_lookups = normalize_prefetch_lookups(reversed(additional_lookups), prefetch_to)
                    auto_lookups.update(new_lookups)
                    all_lookups.extend(new_lookups)
                followed_descriptors.add(descriptor)
            else:
                # Either a singly related object that has already been fetched
                # (e.g. via select_related), or hopefully some other property
                # that doesn't support prefetching but needs to be traversed.

                # We replace the current list of parent objects with the list
                # of related objects, filtering out empty or missing values so
                # that we can continue with nullable or reverse relations.
                new_obj_list = []
                for obj in obj_list:
                    if through_attr in getattr(obj, '_prefetched_objects_cache', ()):
                        # If related objects have been prefetched, use the
                        # cache rather than the object's through_attr.
                        new_obj = list(obj._prefetched_objects_cache.get(through_attr))
                    else:
                        try:
                            new_obj = getattr(obj, through_attr)
                        except exceptions.ObjectDoesNotExist:
                            continue
                    if new_obj is None:
                        continue
                    # We special-case `list` rather than something more generic
                    # like `Iterable` because we don't want to accidentally match
                    # user models that define __iter__.
                    if isinstance(new_obj, list):
                        new_obj_list.extend(new_obj)
                    else:
                        new_obj_list.append(new_obj)
                obj_list = new_obj_list

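prefetch_related_objects() is also exposed as public API via django.db.models, which is handy when you already hold a list of instances instead of a queryset. A hedged sketch, again assuming the hypothetical Entry model with a ForeignKey to Blog:

>>> from django.db.models import prefetch_related_objects
>>> blogs = list(Blog.objects.all())               # instances fetched some other way
>>> prefetch_related_objects(blogs, 'entry_set')   # one extra query fills the caches
>>> blogs[0].entry_set.all()                       # now served from the prefetch cache
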
def get_prefetcher(instance, through_attr, to_attr):
    """
    For the attribute 'through_attr' on the given instance, find
    an object that has a get_prefetch_queryset().
    Return a 4 tuple containing:
    (the object with get_prefetch_queryset (or None),
     the descriptor object representing this relationship (or None),
     a boolean that is False if the attribute was not found at all,
     a boolean that is True if the attribute has already been fetched)
    """
    prefetcher = None
    is_fetched = False

    # For singly related objects, we have to avoid getting the attribute
    # from the object, as this will trigger the query. So we first try
    # on the class, in order to get the descriptor object.
    rel_obj_descriptor = getattr(instance.__class__, through_attr, None)
    if rel_obj_descriptor is None:
        attr_found = hasattr(instance, through_attr)
    else:
        attr_found = True
        if rel_obj_descriptor:
            # singly related object, descriptor object has the
            # get_prefetch_queryset() method.
            if hasattr(rel_obj_descriptor, 'get_prefetch_queryset'):
                prefetcher = rel_obj_descriptor
                if rel_obj_descriptor.is_cached(instance):
                    is_fetched = True
            else:
                # descriptor doesn't support prefetching, so we go ahead and get
                # the attribute on the instance rather than the class to
                # support many related managers
                rel_obj = getattr(instance, through_attr)
                if hasattr(rel_obj, 'get_prefetch_queryset'):
                    prefetcher = rel_obj
                if through_attr != to_attr:
                    # Special case cached_property instances because hasattr
                    # triggers attribute computation and assignment.
                    if isinstance(getattr(instance.__class__, to_attr, None), cached_property):
                        is_fetched = to_attr in instance.__dict__
                    else:
                        is_fetched = hasattr(instance, to_attr)
                else:
                    is_fetched = through_attr in instance._prefetched_objects_cache
    return prefetcher, rel_obj_descriptor, attr_found, is_fetched

def prefetch_one_level(instances, prefetcher, lookup, level):
    """
    Helper function for prefetch_related_objects().

    Run prefetches on all instances using the prefetcher object,
    assigning results to relevant caches in instance.

    Return the prefetched objects along with any additional prefetches that
    must be done due to prefetch_related lookups found from default managers.
    """
    # prefetcher must have a method get_prefetch_queryset() which takes a list
    # of instances, and returns a tuple:

    # (queryset of instances of self.model that are related to passed in instances,
    #  callable that gets value to be matched for returned instances,
    #  callable that gets value to be matched for passed in instances,
    #  boolean that is True for singly related objects,
    #  cache or field name to assign to,
    #  boolean that is True when the previous argument is a cache name vs a field name).

    # The 'values to be matched' must be hashable as they will be used
    # in a dictionary.

    rel_qs, rel_obj_attr, instance_attr, single, cache_name, is_descriptor = (
        prefetcher.get_prefetch_queryset(instances, lookup.get_current_queryset(level)))
    # We have to handle the possibility that the QuerySet we just got back
    # contains some prefetch_related lookups. We don't want to trigger the
    # prefetch_related functionality by evaluating the query. Rather, we need
    # to merge in the prefetch_related lookups.
    # Copy the lookups in case it is a Prefetch object which could be reused
    # later (happens in nested prefetch_related).
    additional_lookups = [
        copy.copy(additional_lookup) for additional_lookup
        in getattr(rel_qs, '_prefetch_related_lookups', ())
    ]
    if additional_lookups:
        # Don't need to clone because the manager should have given us a fresh
        # instance, so we access an internal instead of using public interface
        # for performance reasons.
        rel_qs._prefetch_related_lookups = ()

    all_related_objects = list(rel_qs)

    rel_obj_cache = {}
    for rel_obj in all_related_objects:
        rel_attr_val = rel_obj_attr(rel_obj)
        rel_obj_cache.setdefault(rel_attr_val, []).append(rel_obj)

    to_attr, as_attr = lookup.get_current_to_attr(level)
    # Make sure `to_attr` does not conflict with a field.
    if as_attr and instances:
        # We assume that objects retrieved are homogeneous (which is the premise
        # of prefetch_related), so what applies to first object applies to all.
        model = instances[0].__class__
        try:
            model._meta.get_field(to_attr)
        except exceptions.FieldDoesNotExist:
            pass
        else:
            msg = 'to_attr={} conflicts with a field on the {} model.'
            raise ValueError(msg.format(to_attr, model.__name__))

    # Whether or not we're prefetching the last part of the lookup.
    leaf = len(lookup.prefetch_through.split(LOOKUP_SEP)) - 1 == level

    for obj in instances:
        instance_attr_val = instance_attr(obj)
        vals = rel_obj_cache.get(instance_attr_val, [])

        if single:
            val = vals[0] if vals else None
            if as_attr:
                # A to_attr has been given for the prefetch.
                setattr(obj, to_attr, val)
            elif is_descriptor:
                # cache_name points to a field name in obj.
                # This field is a descriptor for a related object.
                setattr(obj, cache_name, val)
            else:
                # No to_attr has been given for this prefetch operation and the
                # cache_name does not point to a descriptor. Store the value of
                # the field in the object's field cache.
                obj._state.fields_cache[cache_name] = val
        else:
            if as_attr:
                setattr(obj, to_attr, vals)
            else:
                manager = getattr(obj, to_attr)
                if leaf and lookup.queryset is not None:
                    qs = manager._apply_rel_filters(lookup.queryset)
                else:
                    qs = manager.get_queryset()
                qs._result_cache = vals
                # We don't want the individual qs doing prefetch_related now,
                # since we have merged this into the current work.
                qs._prefetch_done = True
                obj._prefetched_objects_cache[cache_name] = qs
    return all_related_objects, additional_lookups

Custom managers:

Modifying a Manager's initial QuerySet

By default, a manager's QuerySet returns every object of the model class. For example, testing in the shell:

>>> from queryset_demo.models import *
>>> Blog.objects.all()
<QuerySet [<Blog: change_new_name>, <Blog: create_test>, <Blog: Cheddar Talk>, <Blog: blog_3>, <Blog: Tom>, <Blog: new>, <Blog: new>]>
>>> # Goal: get the name of every blog
>>> for blg in Blog.objects.all():
...     print(blg.name)
... # (please forgive the untidy test data)
change_new_name
create_test
Cheddar Talk
blog_3
Tom
new
new

Now use a custom Manager so that the returned QuerySet contains just the blog names.

Step 1: in models.py

class Blog_name(models.Manager):
    def get_queryset(self):
        return super().get_queryset().values_list('name', flat=True)

class Blog(models.Model):
    name = models.CharField(max_length=100)
    tagline = models.TextField()

    objects = models.Manager()  # the default manager
    get_blog_name = Blog_name()  # the custom manager

    def __str__(self):
        return self.name

Step 2: test in the shell

>>> from queryset_demo.models import *
>>> Blog.objects.all()
<QuerySet [<Blog: change_new_name>, <Blog: create_test>, <Blog: Cheddar Talk>, <Blog: blog_3>, <Blog: Tom>, <Blog: new>, <Blog: new>]>
>>> Blog.get_blog_name.all()
<QuerySet ['change_new_name', 'create_test', 'Cheddar Talk', 'blog_3', 'Tom', 'new', 'new']>

To summarize: get_queryset(self) returns a QuerySet, so a custom manager can build on that base with the normal QuerySet API to return exactly the results we want.
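Beyond overriding get_queryset(), a custom manager can also add table-level helper methods that build on that base queryset. A minimal sketch (BlogManager and with_prefix are illustrative names, not part of the original example):

class BlogManager(models.Manager):
    def with_prefix(self, prefix):
        # Build on the default queryset with the normal QuerySet API.
        return self.get_queryset().filter(name__startswith=prefix)

Attached to the model as objects = BlogManager(), it would be used like this:

>>> Blog.objects.with_prefix('new')
<QuerySet [<Blog: new>, <Blog: new>]>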

Original article: https://www.cnblogs.com/Echo-O/p/9323621.html
