Merge branch 'akpm' (patches from Andrew)

commit 7eec11d3a7

Pull updates from Andrew Morton:
 "Most of -mm and quite a number of other subsystems: hotfixes, scripts,
  ocfs2, misc, lib, binfmt, init, reiserfs, exec, dma-mapping, kcov.

  MM is fairly quiet this time. Holidays, I assume"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (118 commits)
  kcov: ignore fault-inject and stacktrace
  include/linux/io-mapping.h-mapping: use PHYS_PFN() macro in io_mapping_map_atomic_wc()
  execve: warn if process starts with executable stack
  reiserfs: prevent NULL pointer dereference in reiserfs_insert_item()
  init/main.c: fix misleading "This architecture does not have kernel memory protection" message
  init/main.c: fix quoted value handling in unknown_bootoption
  init/main.c: remove unnecessary repair_env_string in do_initcall_level
  init/main.c: log arguments and environment passed to init
  fs/binfmt_elf.c: coredump: allow process with empty address space to coredump
  fs/binfmt_elf.c: coredump: delete duplicated overflow check
  fs/binfmt_elf.c: coredump: allocate core ELF header on stack
  fs/binfmt_elf.c: make BAD_ADDR() unlikely
  fs/binfmt_elf.c: better codegen around current->mm
  fs/binfmt_elf.c: don't copy ELF header around
  fs/binfmt_elf.c: fix ->start_code calculation
  fs/binfmt_elf.c: smaller code generation around auxv vector fill
  lib/find_bit.c: uninline helper _find_next_bit()
  lib/find_bit.c: join _find_next_bit{_le}
  uapi: rename ext2_swab() to swab() and share globally in swab.h
  lib/scatterlist.c: adjust indentation in __sg_alloc_table
  ...
@@ -834,6 +834,18 @@
 			dump out devices still on the deferred probe list after
 			retrying.

+	dfltcc=		[HW,S390]
+			Format: { on | off | def_only | inf_only | always }
+			on:       s390 zlib hardware support for compression on
+			          level 1 and decompression (default)
+			off:      No s390 zlib hardware support
+			def_only: s390 zlib hardware support for deflate
+			          only (compression on level 1)
+			inf_only: s390 zlib hardware support for inflate
+			          only (decompression)
+			always:   Same as 'on' but ignores the selected compression
+			          level always using hardware support (used for debugging)
+
 	dhash_entries=	[KNL]
 			Set number of hash buckets for dentry cache.

@@ -31,6 +31,7 @@ Core utilities
    generic-radix-tree
    memory-allocation
    mm-api
+   pin_user_pages
    gfp_mask-from-fs-io
    timekeeping
    boot-time-mm
Documentation/core-api/pin_user_pages.rst (new file, 232 lines)

@@ -0,0 +1,232 @@
.. SPDX-License-Identifier: GPL-2.0

====================================================
pin_user_pages() and related calls
====================================================

.. contents:: :local:

Overview
========

This document describes the following functions::

 pin_user_pages()
 pin_user_pages_fast()
 pin_user_pages_remote()

Basic description of FOLL_PIN
=============================

FOLL_PIN and FOLL_LONGTERM are flags that can be passed to the get_user_pages*()
("gup") family of functions. FOLL_PIN has significant interactions and
interdependencies with FOLL_LONGTERM, so both are covered here.

FOLL_PIN is internal to gup, meaning that it should not appear at the gup call
sites. This allows the associated wrapper functions (pin_user_pages*() and
others) to set the correct combination of these flags, and to check for problems
as well.

FOLL_LONGTERM, on the other hand, *is* allowed to be set at the gup call sites.
This is in order to avoid creating a large number of wrapper functions to cover
all combinations of get*(), pin*(), FOLL_LONGTERM, and more. Also, the
pin_user_pages*() APIs are clearly distinct from the get_user_pages*() APIs, so
that's a natural dividing line, and a good point to make separate wrapper calls.
In other words, use pin_user_pages*() for DMA-pinned pages, and
get_user_pages*() for other cases. There are four cases described later on in
this document, to further clarify that concept.

FOLL_PIN and FOLL_GET are mutually exclusive for a given gup call. However,
multiple threads and call sites are free to pin the same struct pages, via both
FOLL_PIN and FOLL_GET. It's just the call site that needs to choose one or the
other, not the struct page(s).

The FOLL_PIN implementation is nearly the same as FOLL_GET, except that FOLL_PIN
uses a different reference counting technique.

FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying that is,
FOLL_LONGTERM is a specific, more restrictive case of FOLL_PIN.

Which flags are set by each wrapper
===================================

For these pin_user_pages*() functions, FOLL_PIN is OR'd in with whatever gup
flags the caller provides. The caller is required to pass in a non-null struct
pages* array, and the function then pins pages by incrementing each by a special
value. For now, that value is +1, just like get_user_pages*().::

 Function
 --------
 pin_user_pages          FOLL_PIN is always set internally by this function.
 pin_user_pages_fast     FOLL_PIN is always set internally by this function.
 pin_user_pages_remote   FOLL_PIN is always set internally by this function.

For these get_user_pages*() functions, FOLL_GET might not even be specified.
Behavior is a little more complex than above. If FOLL_GET was *not* specified,
but the caller passed in a non-null struct pages* array, then the function
sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
of each page by +1.::

 Function
 --------
 get_user_pages           FOLL_GET is sometimes set internally by this function.
 get_user_pages_fast      FOLL_GET is sometimes set internally by this function.
 get_user_pages_remote    FOLL_GET is sometimes set internally by this function.

Tracking dma-pinned pages
=========================

Some of the key design constraints, and solutions, for tracking dma-pinned
pages:

* An actual reference count, per struct page, is required. This is because
  multiple processes may pin and unpin a page.

* False positives (reporting that a page is dma-pinned, when in fact it is not)
  are acceptable, but false negatives are not.

* struct page may not be increased in size for this, and all fields are already
  used.

* Given the above, we can overload the page->_refcount field by using, sort of,
  the upper bits in that field for a dma-pinned count. "Sort of", means that,
  rather than dividing page->_refcount into bit fields, we simply add a
  medium-large value (GUP_PIN_COUNTING_BIAS, initially chosen to be 1024:
  10 bits) to page->_refcount. This provides fuzzy behavior: if a page has
  get_page() called on it 1024 times, then it will appear to have a single
  dma-pinned count. And again, that's acceptable.
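As a concrete illustration of that arithmetic, here is a minimal sketch of
bias-based pinning, using the page_ref_add()/page_ref_sub() helpers from
include/linux/page_ref.h. The sketch_*() names are hypothetical, and the real
gup implementation differs in detail::

 #define GUP_PIN_COUNTING_BIAS (1U << 10)	/* 1024, i.e. 10 bits */

 /* FOLL_PIN path: add the bias instead of a plain +1. */
 static void sketch_pin_page(struct page *page)
 {
 	page_ref_add(page, GUP_PIN_COUNTING_BIAS);
 }

 /* Unpin: subtract the same bias. */
 static void sketch_unpin_page(struct page *page)
 {
 	page_ref_sub(page, GUP_PIN_COUNTING_BIAS);
 }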
This also leads to limitations: there are only 31-10==21 bits available for a
counter that increments 10 bits at a time.

TODO: for 1GB and larger huge pages, this is cutting it close. That's because
when pin_user_pages() follows such pages, it increments the head page by "1"
(where "1" used to mean "+1" for get_user_pages(), but now means "+1024" for
pin_user_pages()) for each tail page. So if you have a 1GB huge page:

* There are 256K (18 bits) worth of 4 KB tail pages.
* There are 21 bits available to count up via GUP_PIN_COUNTING_BIAS (that is,
  10 bits at a time)
* There are 21 - 18 == 3 bits available to count. Except that there aren't,
  because you need to allow for a few normal get_page() calls on the head page,
  as well. Fortunately, the approach of using addition, rather than "hard"
  bitfields, within page->_refcount, allows for sharing these bits gracefully.
  But we're still looking at about 8 references.

This, however, is a missing feature more than anything else, because it's easily
solved by addressing an obvious inefficiency in the original get_user_pages()
approach of retrieving pages: stop treating all the pages as if they were
PAGE_SIZE. Retrieve huge pages as huge pages. The callers need to be aware of
this, so some work is required. Once that's in place, this limitation mostly
disappears from view, because there will be ample refcounting range available.

* Callers must specifically request "dma-pinned tracking of pages". In other
  words, just calling get_user_pages() will not suffice; a new set of functions,
  pin_user_pages() and related, must be used.

FOLL_PIN, FOLL_GET, FOLL_LONGTERM: when to use which flags
==========================================================

Thanks to Jan Kara, Vlastimil Babka and several other -mm people, for describing
these categories:

CASE 1: Direct IO (DIO)
-----------------------
There are GUP references to pages that are serving as DIO buffers. These
buffers are needed for a relatively short time (so they are not "long term").
No special synchronization with page_mkclean() or munmap() is provided.
Therefore, flags to set at the call site are: ::

    FOLL_PIN

...but rather than setting FOLL_PIN directly, call sites should use one of
the pin_user_pages*() routines that set FOLL_PIN.
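As a hedged sketch of what such a CASE 1 call site looks like end-to-end (the
buffer name and page count here are hypothetical, and error handling is
abbreviated)::

 struct page *pages[NR_PAGES];	/* NR_PAGES: illustrative constant */
 int rc;

 /* Short-term pin of a user buffer that a device will write into. */
 rc = pin_user_pages_fast(user_addr, NR_PAGES, FOLL_WRITE, pages);
 if (rc <= 0)
 	return rc ? rc : -EFAULT;

 /* ... issue the direct IO against pages[0..rc-1] ... */

 /* Release the pins, dirtying the pages the device wrote. */
 unpin_user_pages_dirty_lock(pages, rc, true);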
CASE 2: RDMA
------------
There are GUP references to pages that are serving as DMA buffers. These
buffers are needed for a long time ("long term"). No special synchronization
with page_mkclean() or munmap() is provided. Therefore, flags to set at the
call site are: ::

    FOLL_PIN | FOLL_LONGTERM

NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
because DAX pages do not have a separate page cache, and so "pinning" implies
locking down file system blocks, which is not (yet) supported in that way.

CASE 3: Hardware with page faulting support
-------------------------------------------
Here, a well-written driver doesn't normally need to pin pages at all. However,
if the driver does choose to do so, it can register MMU notifiers for the range,
and will be called back upon invalidation. Either way (avoiding page pinning, or
using MMU notifiers to unpin upon request), there is proper synchronization with
both filesystem and mm (page_mkclean(), munmap(), etc).

Therefore, neither flag needs to be set.

In this case, ideally, neither get_user_pages() nor pin_user_pages() should be
called. Instead, the software should be written so that it does not pin pages.
This allows mm and filesystems to operate more efficiently and reliably.

CASE 4: Pinning for struct page manipulation only
-------------------------------------------------
Here, normal GUP calls are sufficient, so neither flag needs to be set.

page_dma_pinned(): the whole point of pinning
=============================================

The whole point of marking pages as "DMA-pinned" or "gup-pinned" is to be able
to query, "is this page DMA-pinned?" That allows code such as page_mkclean()
(and file system writeback code in general) to make informed decisions about
what to do when a page cannot be unmapped due to such pins.

What to do in those cases is the subject of a years-long series of discussions
and debates (see the References at the end of this document). It's a TODO item
here: fill in the details once that's worked out. Meanwhile, it's safe to say
that having this available: ::

 static inline bool page_dma_pinned(struct page *page)

...is a prerequisite to solving the long-running gup+DMA problem.
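Given the bias scheme described earlier, one plausible shape for that query -
a sketch only, not a committed implementation - is a simple threshold test on
the refcount::

 static inline bool page_dma_pinned(struct page *page)
 {
 	/*
 	 * Fuzzy by design: 1024 ordinary get_page() references look
 	 * the same as one FOLL_PIN, which is exactly the tolerated
 	 * false-positive case described above.
 	 */
 	return page_ref_count(page) >= GUP_PIN_COUNTING_BIAS;
 }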
Another way of thinking about FOLL_GET, FOLL_PIN, and FOLL_LONGTERM
===================================================================

Another way of thinking about these flags is as a progression of restrictions:
FOLL_GET is for struct page manipulation, without affecting the data that the
struct page refers to. FOLL_PIN is a *replacement* for FOLL_GET, and is for
short term pins on pages whose data *will* get accessed. As such, FOLL_PIN is
a "more severe" form of pinning. And finally, FOLL_LONGTERM is an even more
restrictive case that has FOLL_PIN as a prerequisite: this is for pages that
will be pinned longterm, and whose data will be accessed.

Unit testing
============
This file::

 tools/testing/selftests/vm/gup_benchmark.c

has the following new calls to exercise the new pin*() wrapper functions:

* PIN_FAST_BENCHMARK (./gup_benchmark -a)
* PIN_BENCHMARK (./gup_benchmark -b)

You can monitor how many total dma-pinned pages have been acquired and released
since the system was booted, via two new /proc/vmstat entries: ::

    /proc/vmstat/nr_foll_pin_requested
    /proc/vmstat/nr_foll_pin_returned

Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is
because there is a noticeable performance drop in unpin_user_page(), when they
are activated.

References
==========

* `Some slow progress on get_user_pages() (Apr 2, 2019) <https://lwn.net/Articles/784574/>`_
* `DMA and get_user_pages() (LPC: Dec 12, 2018) <https://lwn.net/Articles/774411/>`_
* `The trouble with get_user_pages() (Apr 30, 2018) <https://lwn.net/Articles/753027/>`_

John Hubbard, October, 2019
@@ -130,6 +130,19 @@ checking for the same-value filled pages during store operation. However, the
 existing pages which are marked as same-value filled pages remain stored
 unchanged in zswap until they are either loaded or invalidated.

+To prevent zswap from shrinking the pool when zswap is full and there is high
+pressure on swap (which would result in flipping pages in and out of the zswap
+pool without any real benefit, but with a performance drop for the system), a
+special parameter has been introduced to implement a sort of hysteresis: once
+the limit has been hit, zswap refuses to take pages into the pool until it has
+sufficient space again. To set the threshold at which zswap starts accepting
+pages again after it became full, use the sysfs ``accept_threshold_percent``
+attribute, e.g.::
+
+	echo 80 > /sys/module/zswap/parameters/accept_threshold_percent
+
+Setting this parameter to 100 will disable the hysteresis.
+
 A debugfs interface is provided for various statistics about pool size, number
 of pages stored, same-value filled pages and various counters for the reasons
 pages are rejected.
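As a quick way to inspect those statistics (assuming debugfs is mounted at its
conventional location), every file under the zswap debugfs directory can be
dumped alongside its current value::

	grep . /sys/kernel/debug/zswap/*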
@@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	for (entry = 0; entry < entries; entry += chunk) {
 		unsigned long n = min(entries - entry, chunk);

-		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
+		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
 				FOLL_WRITE | FOLL_LONGTERM,
 				mem->hpages + entry, NULL);
 		if (ret == n) {

@@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	return 0;

 free_exit:
-	/* free the reference taken */
-	for (i = 0; i < pinned; i++)
-		put_page(mem->hpages[i]);
+	/* free the references taken */
+	unpin_user_pages(mem->hpages, pinned);

 	vfree(mem->hpas);
 	kfree(mem);

@@ -215,7 +214,8 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
 			SetPageDirty(page);

-		put_page(page);
+		unpin_user_page(page);

 		mem->hpas[i] = 0;
 	}
 }

@@ -30,13 +30,13 @@ extern unsigned char _compressed_start[];
 extern unsigned char _compressed_end[];

 #ifdef CONFIG_HAVE_KERNEL_BZIP2
-#define HEAP_SIZE	0x400000
+#define BOOT_HEAP_SIZE	0x400000
 #else
-#define HEAP_SIZE	0x10000
+#define BOOT_HEAP_SIZE	0x10000
 #endif

 static unsigned long free_mem_ptr = (unsigned long) _end;
-static unsigned long free_mem_end_ptr = (unsigned long) _end + HEAP_SIZE;
+static unsigned long free_mem_end_ptr = (unsigned long) _end + BOOT_HEAP_SIZE;

 #ifdef CONFIG_KERNEL_GZIP
 #include "../../../../lib/decompress_inflate.c"

@@ -62,7 +62,7 @@ static unsigned long free_mem_end_ptr = (unsigned long) _end + HEAP_SIZE;
 #include "../../../../lib/decompress_unxz.c"
 #endif

-#define decompress_offset ALIGN((unsigned long)_end + HEAP_SIZE, PAGE_SIZE)
+#define decompress_offset ALIGN((unsigned long)_end + BOOT_HEAP_SIZE, PAGE_SIZE)

 unsigned long mem_safe_offset(void)
 {
@@ -14,6 +14,7 @@
 char __bootdata(early_command_line)[COMMAND_LINE_SIZE];
 struct ipl_parameter_block __bootdata_preserved(ipl_block);
 int __bootdata_preserved(ipl_block_valid);
+unsigned int __bootdata_preserved(zlib_dfltcc_support) = ZLIB_DFLTCC_FULL;

 unsigned long __bootdata(vmalloc_size) = VMALLOC_DEFAULT_SIZE;
 unsigned long __bootdata(memory_end);

@@ -229,6 +230,19 @@ void parse_boot_command_line(void)
 		if (!strcmp(param, "vmalloc") && val)
 			vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE);

+		if (!strcmp(param, "dfltcc")) {
+			if (!strcmp(val, "off"))
+				zlib_dfltcc_support = ZLIB_DFLTCC_DISABLED;
+			else if (!strcmp(val, "on"))
+				zlib_dfltcc_support = ZLIB_DFLTCC_FULL;
+			else if (!strcmp(val, "def_only"))
+				zlib_dfltcc_support = ZLIB_DFLTCC_DEFLATE_ONLY;
+			else if (!strcmp(val, "inf_only"))
+				zlib_dfltcc_support = ZLIB_DFLTCC_INFLATE_ONLY;
+			else if (!strcmp(val, "always"))
+				zlib_dfltcc_support = ZLIB_DFLTCC_FULL_DEBUG;
+		}
+
 		if (!strcmp(param, "noexec")) {
 			rc = kstrtobool(val, &enabled);
 			if (!rc && !enabled)

@@ -79,6 +79,13 @@ struct parmarea {
 	char command_line[ARCH_COMMAND_LINE_SIZE];	/* 0x10480 */
 };

+extern unsigned int zlib_dfltcc_support;
+#define ZLIB_DFLTCC_DISABLED		0
+#define ZLIB_DFLTCC_FULL		1
+#define ZLIB_DFLTCC_DEFLATE_ONLY	2
+#define ZLIB_DFLTCC_INFLATE_ONLY	3
+#define ZLIB_DFLTCC_FULL_DEBUG		4
+
 extern int noexec_disabled;
 extern int memory_end_set;
 extern unsigned long memory_end;

@@ -111,6 +111,8 @@ unsigned long __bootdata_preserved(__etext_dma);
 unsigned long __bootdata_preserved(__sdma);
 unsigned long __bootdata_preserved(__edma);
 unsigned long __bootdata_preserved(__kaslr_offset);
+unsigned int __bootdata_preserved(zlib_dfltcc_support);
+EXPORT_SYMBOL(zlib_dfltcc_support);

 unsigned long VMALLOC_START;
 EXPORT_SYMBOL(VMALLOC_START);

@@ -759,14 +761,6 @@ static void __init free_mem_detect_info(void)
 		memblock_free(start, size);
 }

-static void __init memblock_physmem_add(phys_addr_t start, phys_addr_t size)
-{
-	memblock_dbg("memblock_physmem_add: [%#016llx-%#016llx]\n",
-		     start, start + size - 1);
-	memblock_add_range(&memblock.memory, start, size, 0, 0);
-	memblock_add_range(&memblock.physmem, start, size, 0, 0);
-}
-
 static const char * __init get_mem_info_source(void)
 {
 	switch (mem_detect.info_source) {

@@ -791,8 +785,10 @@ static void __init memblock_add_mem_detect_info(void)
 		      get_mem_info_source(), mem_detect.info_source);
 	/* keep memblock lists close to the kernel */
 	memblock_set_bottom_up(true);
-	for_each_mem_detect_block(i, &start, &end)
+	for_each_mem_detect_block(i, &start, &end) {
 		memblock_add(start, end - start);
+		memblock_physmem_add(start, end - start);
+	}
 	memblock_set_bottom_up(false);
 	memblock_dump_all();
 }
@@ -27,6 +27,7 @@
 #include <linux/acpi.h>
 #include <linux/workqueue.h>
 #include <linux/uaccess.h>
+#include <linux/units.h>

 #define PREFIX "ACPI: "

@@ -172,7 +173,7 @@ struct acpi_thermal {
 	struct acpi_handle_list devices;
 	struct thermal_zone_device *thermal_zone;
 	int tz_enabled;
-	int kelvin_offset;
+	int kelvin_offset;	/* in millidegrees */
 	struct work_struct thermal_check_work;
 };

@@ -297,7 +298,8 @@ static int acpi_thermal_trips_update(struct acpi_thermal *tz, int flag)
 		if (crt == -1) {
 			tz->trips.critical.flags.valid = 0;
 		} else if (crt > 0) {
-			unsigned long crt_k = CELSIUS_TO_DECI_KELVIN(crt);
+			unsigned long crt_k = celsius_to_deci_kelvin(crt);
+
 			/*
 			 * Allow override critical threshold
 			 */

@@ -333,7 +335,7 @@ static int acpi_thermal_trips_update(struct acpi_thermal *tz, int flag)
 		if (psv == -1) {
 			status = AE_SUPPORT;
 		} else if (psv > 0) {
-			tmp = CELSIUS_TO_DECI_KELVIN(psv);
+			tmp = celsius_to_deci_kelvin(psv);
 			status = AE_OK;
 		} else {
 			status = acpi_evaluate_integer(tz->device->handle,

@@ -413,7 +415,7 @@ static int acpi_thermal_trips_update(struct acpi_thermal *tz, int flag)
 				break;
 			if (i == 1)
 				tz->trips.active[0].temperature =
-					CELSIUS_TO_DECI_KELVIN(act);
+					celsius_to_deci_kelvin(act);
 			else
 				/*
 				 * Don't allow override higher than

@@ -421,9 +423,9 @@ static int acpi_thermal_trips_update(struct acpi_thermal *tz, int flag)
 				 */
 				tz->trips.active[i - 1].temperature =
 					(tz->trips.active[i - 2].temperature <
-					CELSIUS_TO_DECI_KELVIN(act) ?
+					celsius_to_deci_kelvin(act) ?
 					tz->trips.active[i - 2].temperature :
-					CELSIUS_TO_DECI_KELVIN(act));
+					celsius_to_deci_kelvin(act));
 			break;
 		} else {
 			tz->trips.active[i].temperature = tmp;

@@ -519,7 +521,7 @@ static int thermal_get_temp(struct thermal_zone_device *thermal, int *temp)
 	if (result)
 		return result;

-	*temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(tz->temperature,
+	*temp = deci_kelvin_to_millicelsius_with_offset(tz->temperature,
 							tz->kelvin_offset);
 	return 0;
 }

@@ -624,7 +626,7 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,

 	if (tz->trips.critical.flags.valid) {
 		if (!trip) {
-			*temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+			*temp = deci_kelvin_to_millicelsius_with_offset(
 				tz->trips.critical.temperature,
 				tz->kelvin_offset);
 			return 0;

@@ -634,7 +636,7 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,

 	if (tz->trips.hot.flags.valid) {
 		if (!trip) {
-			*temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+			*temp = deci_kelvin_to_millicelsius_with_offset(
 				tz->trips.hot.temperature,
 				tz->kelvin_offset);
 			return 0;

@@ -644,7 +646,7 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,

 	if (tz->trips.passive.flags.valid) {
 		if (!trip) {
-			*temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+			*temp = deci_kelvin_to_millicelsius_with_offset(
 				tz->trips.passive.temperature,
 				tz->kelvin_offset);
 			return 0;

@@ -655,7 +657,7 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,
 	for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE &&
 		tz->trips.active[i].flags.valid; i++) {
 		if (!trip) {
-			*temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+			*temp = deci_kelvin_to_millicelsius_with_offset(
 				tz->trips.active[i].temperature,
 				tz->kelvin_offset);
 			return 0;

@@ -672,7 +674,7 @@ static int thermal_get_crit_temp(struct thermal_zone_device *thermal,
 	struct acpi_thermal *tz = thermal->devdata;

 	if (tz->trips.critical.flags.valid) {
-		*temperature = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+		*temperature = deci_kelvin_to_millicelsius_with_offset(
 			tz->trips.critical.temperature,
 			tz->kelvin_offset);
 		return 0;

@@ -692,7 +694,7 @@ static int thermal_get_trend(struct thermal_zone_device *thermal,

 	if (type == THERMAL_TRIP_ACTIVE) {
 		int trip_temp;
-		int temp = DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(
+		int temp = deci_kelvin_to_millicelsius_with_offset(
 				tz->temperature, tz->kelvin_offset);
 		if (thermal_get_trip_temp(thermal, trip, &trip_temp))
 			return -EINVAL;

@@ -1043,9 +1045,9 @@ static void acpi_thermal_guess_offset(struct acpi_thermal *tz)
 {
 	if (tz->trips.critical.flags.valid &&
 	    (tz->trips.critical.temperature % 5) == 1)
-		tz->kelvin_offset = 2731;
+		tz->kelvin_offset = 273100;
 	else
-		tz->kelvin_offset = 2732;
+		tz->kelvin_offset = 273200;
 }

 static void acpi_thermal_check_fn(struct work_struct *work)

@@ -1087,7 +1089,7 @@ static int acpi_thermal_add(struct acpi_device *device)
 	INIT_WORK(&tz->thermal_check_work, acpi_thermal_check_fn);

 	pr_info(PREFIX "%s [%s] (%ld C)\n", acpi_device_name(device),
-		acpi_device_bid(device), DECI_KELVIN_TO_CELSIUS(tz->temperature));
+		acpi_device_bid(device), deci_kelvin_to_celsius(tz->temperature));
 	goto end;

 free_memory:
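The temperature conversions above all rely on the <linux/units.h> helpers that
this series introduces in place of the old per-driver macros. As a rough sketch
of their shape (assumed, not verbatim; the exact kernel definitions may differ
slightly, and DIV_ROUND_CLOSEST() comes from <linux/kernel.h>):

	#define ABSOLUTE_ZERO_MILLICELSIUS	-273150
	#define MILLIDEGREE_PER_DEGREE		1000
	#define MILLIDEGREE_PER_DECIDEGREE	100

	static inline long milli_kelvin_to_millicelsius(long t)
	{
		return t + ABSOLUTE_ZERO_MILLICELSIUS;
	}

	static inline long millicelsius_to_milli_kelvin(long t)
	{
		return t - ABSOLUTE_ZERO_MILLICELSIUS;
	}

	static inline long celsius_to_deci_kelvin(long t)
	{
		return DIV_ROUND_CLOSEST(
			millicelsius_to_milli_kelvin(t * MILLIDEGREE_PER_DEGREE),
			MILLIDEGREE_PER_DECIDEGREE);
	}

	/* kelvin_offset is stored in millidegrees, hence the plain subtraction. */
	static inline long deci_kelvin_to_millicelsius_with_offset(long t, long offset)
	{
		return t * MILLIDEGREE_PER_DECIDEGREE - offset;
	}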
@@ -70,20 +70,6 @@ void unregister_memory_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL(unregister_memory_notifier);

-static ATOMIC_NOTIFIER_HEAD(memory_isolate_chain);
-
-int register_memory_isolate_notifier(struct notifier_block *nb)
-{
-	return atomic_notifier_chain_register(&memory_isolate_chain, nb);
-}
-EXPORT_SYMBOL(register_memory_isolate_notifier);
-
-void unregister_memory_isolate_notifier(struct notifier_block *nb)
-{
-	atomic_notifier_chain_unregister(&memory_isolate_chain, nb);
-}
-EXPORT_SYMBOL(unregister_memory_isolate_notifier);
-
 static void memory_block_release(struct device *dev)
 {
 	struct memory_block *mem = to_memory_block(dev);

@@ -175,11 +161,6 @@ int memory_notify(unsigned long val, void *v)
 	return blocking_notifier_call_chain(&memory_chain, val, v);
 }

-int memory_isolate_notify(unsigned long val, void *v)
-{
-	return atomic_notifier_call_chain(&memory_isolate_chain, val, v);
-}
-
 /*
  * The probe routines leave the pages uninitialized, just as the bootmem code
  * does. Make sure we do not access them, but instead use only information from

@@ -225,7 +206,7 @@ static bool pages_correctly_probed(unsigned long start_pfn)
  */
 static int
 memory_block_action(unsigned long start_section_nr, unsigned long action,
-		    int online_type)
+		    int online_type, int nid)
 {
 	unsigned long start_pfn;
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;

@@ -238,7 +219,7 @@ memory_block_action(unsigned long start_section_nr, unsigned long action,
 		if (!pages_correctly_probed(start_pfn))
 			return -EBUSY;

-		ret = online_pages(start_pfn, nr_pages, online_type);
+		ret = online_pages(start_pfn, nr_pages, online_type, nid);
 		break;
 	case MEM_OFFLINE:
 		ret = offline_pages(start_pfn, nr_pages);

@@ -264,7 +245,7 @@ static int memory_block_change_state(struct memory_block *mem,
 		mem->state = MEM_GOING_OFFLINE;

 	ret = memory_block_action(mem->start_section_nr, to_state,
-				  mem->online_type);
+				  mem->online_type, mem->nid);

 	mem->state = ret ? from_state_req : to_state;
@@ -207,14 +207,17 @@ static inline void zram_fill_page(void *ptr, unsigned long len,

 static bool page_same_filled(void *ptr, unsigned long *element)
 {
-	unsigned int pos;
 	unsigned long *page;
 	unsigned long val;
+	unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1;

 	page = (unsigned long *)ptr;
 	val = page[0];

-	for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
-		if (val != page[pos])
-			return false;
+	if (val != page[last_pos])
+		return false;
+
+	for (pos = 1; pos < last_pos; pos++) {
+		if (val != page[pos])
+			return false;
 	}

@@ -626,7 +629,7 @@ static ssize_t writeback_store(struct device *dev,
 	struct bio bio;
 	struct bio_vec bio_vec;
 	struct page *page;
-	ssize_t ret;
+	ssize_t ret = len;
 	int mode;
 	unsigned long blk_idx = 0;

@@ -762,7 +765,6 @@ static ssize_t writeback_store(struct device *dev,

 	if (blk_idx)
 		free_block_bdev(zram, blk_idx);
-	ret = len;
 	__free_page(page);
 release_init_lock:
 	up_read(&zram->init_lock);
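The page_same_filled() reordering above targets a specific worst case: a page
that is identical in every word except the last one (for example, a mostly-zero
page carrying a trailing marker), which the old loop only rejected after
scanning the whole page. A standalone user-space model of the new shape, purely
for illustration:

	#include <stdbool.h>
	#include <stddef.h>

	#define WORDS 512	/* one 4 KB page of 8-byte words, illustrative */

	static bool same_filled(const unsigned long *page)
	{
		size_t pos, last_pos = WORDS - 1;

		if (page[0] != page[last_pos])	/* cheap early reject */
			return false;
		for (pos = 1; pos < last_pos; pos++)
			if (page[pos] != page[0])
				return false;
		return true;
	}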
@@ -188,8 +188,8 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
 		kfree(vsg->desc_pages);
 		/* fall through */
 	case dr_via_pages_locked:
-		put_user_pages_dirty_lock(vsg->pages, vsg->num_pages,
-					  (vsg->direction == DMA_FROM_DEVICE));
+		unpin_user_pages_dirty_lock(vsg->pages, vsg->num_pages,
+					    (vsg->direction == DMA_FROM_DEVICE));
 		/* fall through */
 	case dr_via_pages_alloc:
 		vfree(vsg->pages);

@@ -239,7 +239,7 @@ via_lock_all_dma_pages(drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer)
 	vsg->pages = vzalloc(array_size(sizeof(struct page *), vsg->num_pages));
 	if (NULL == vsg->pages)
 		return -ENOMEM;
-	ret = get_user_pages_fast((unsigned long)xfer->mem_addr,
+	ret = pin_user_pages_fast((unsigned long)xfer->mem_addr,
 			vsg->num_pages,
 			vsg->direction == DMA_FROM_DEVICE ? FOLL_WRITE : 0,
 			vsg->pages);

@@ -6,6 +6,7 @@
 #include <linux/log2.h>
 #include <linux/err.h>
 #include <linux/module.h>
+#include <linux/units.h>

 #include "qcom-vadc-common.h"

@@ -236,8 +237,7 @@ static int qcom_vadc_scale_die_temp(const struct vadc_linear_graph *calib_graph,
 		voltage = 0;
 	}

-	voltage -= KELVINMIL_CELSIUSMIL;
-	*result_mdec = voltage;
+	*result_mdec = milli_kelvin_to_millicelsius(voltage);

 	return 0;
 }

@@ -325,7 +325,7 @@ static int qcom_vadc_scale_hw_calib_die_temp(
 {
 	*result_mdec = qcom_vadc_scale_code_voltage_factor(adc_code,
 				prescale, data, 2);
-	*result_mdec -= KELVINMIL_CELSIUSMIL;
+	*result_mdec = milli_kelvin_to_millicelsius(*result_mdec);

 	return 0;
 }

@@ -38,7 +38,6 @@
 #define VADC_AVG_SAMPLES_MAX		512
 #define ADC5_AVG_SAMPLES_MAX		16

-#define KELVINMIL_CELSIUSMIL		273150
 #define PMIC5_CHG_TEMP_SCALE_FACTOR	377500
 #define PMIC5_SMB_TEMP_CONSTANT		419400
 #define PMIC5_SMB_TEMP_SCALE_FACTOR	356
@@ -54,7 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d

 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
+		unpin_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}

 	sg_free_table(&umem->sg_head);

@@ -257,16 +257,13 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	sg = umem->sg_head.sgl;

 	while (npages) {
-		down_read(&mm->mmap_sem);
-		ret = get_user_pages(cur_base,
-				     min_t(unsigned long, npages,
-					   PAGE_SIZE / sizeof (struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
-		if (ret < 0) {
-			up_read(&mm->mmap_sem);
+		ret = pin_user_pages_fast(cur_base,
+					  min_t(unsigned long, npages,
+						PAGE_SIZE /
+						sizeof(struct page *)),
+					  gup_flags | FOLL_LONGTERM, page_list);
+		if (ret < 0)
 			goto umem_release;
-		}

 		cur_base += ret * PAGE_SIZE;
 		npages -= ret;

@@ -274,8 +271,6 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 		sg = ib_umem_add_sg_table(sg, page_list, ret,
 			dma_get_max_seg_size(device->dma_device),
 			&umem->sg_nents);
-
-		up_read(&mm->mmap_sem);
 	}

 	sg_mark_end(sg);

@@ -293,9 +293,8 @@ EXPORT_SYMBOL(ib_umem_odp_release);
  * The function returns -EFAULT if the DMA mapping operation fails. It returns
  * -EAGAIN if a concurrent invalidation prevents us from updating the page.
  *
- * The page is released via put_user_page even if the operation failed. For
- * on-demand pinning, the page is released whenever it isn't stored in the
- * umem.
+ * The page is released via put_page even if the operation failed. For on-demand
+ * pinning, the page is released whenever it isn't stored in the umem.
  */
 static int ib_umem_odp_map_dma_single_page(
 		struct ib_umem_odp *umem_odp,

@@ -348,7 +347,7 @@ static int ib_umem_odp_map_dma_single_page(
 	}

 out:
-	put_user_page(page);
+	put_page(page);
 	return ret;
 }

@@ -458,7 +457,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 				ret = -EFAULT;
 				break;
 			}
-			put_user_page(local_page_list[j]);
+			put_page(local_page_list[j]);
 			continue;
 		}

@@ -485,8 +484,8 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 			 * ib_umem_odp_map_dma_single_page().
 			 */
 			if (npages - (j + 1) > 0)
-				put_user_pages(&local_page_list[j+1],
-					       npages - (j + 1));
+				release_pages(&local_page_list[j+1],
+					      npages - (j + 1));
 			break;
 		}
 	}
@@ -106,7 +106,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 	int ret;
 	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);

-	ret = get_user_pages_fast(vaddr, npages, gup_flags, pages);
+	ret = pin_user_pages_fast(vaddr, npages, gup_flags, pages);
 	if (ret < 0)
 		return ret;

@@ -118,7 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	put_user_pages_dirty_lock(p, npages, dirty);
+	unpin_user_pages_dirty_lock(p, npages, dirty);

 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);

@@ -472,7 +472,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 		goto out;
 	}

-	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
+	ret = pin_user_pages_fast(uaddr & PAGE_MASK, 1,
 				  FOLL_WRITE | FOLL_LONGTERM, pages);
 	if (ret < 0)
 		goto out;

@@ -482,7 +482,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,

 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_user_page(pages[0]);
+		unpin_user_page(pages[0]);
 		goto out;
 	}

@@ -490,7 +490,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 			 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_user_page(sg_page(&db_tab->page[i].mem));
+		unpin_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}

@@ -556,7 +556,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_user_page(sg_page(&db_tab->page[i].mem));
+			unpin_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}
@@ -40,7 +40,7 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	put_user_pages_dirty_lock(p, num_pages, dirty);
+	unpin_user_pages_dirty_lock(p, num_pages, dirty);
 }

 /**

@@ -108,7 +108,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,

 	down_read(&current->mm->mmap_sem);
 	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(start_page + got * PAGE_SIZE,
+		ret = pin_user_pages(start_page + got * PAGE_SIZE,
 				     num_pages - got,
 				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
 				     p + got, NULL);

@@ -317,7 +317,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 	 * the caller can ignore this page.
 	 */
 	if (put) {
-		put_user_page(page);
+		unpin_user_page(page);
 	} else {
 		/* coalesce case */
 		kunmap(page);

@@ -631,7 +631,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);

 		if (pkt->addr[i].put_page)
-			put_user_page(pkt->addr[i].page);
+			unpin_user_page(pkt->addr[i].page);
 		else
 			__free_page(pkt->addr[i].page);
 	} else if (pkt->addr[i].kvaddr) {

@@ -670,7 +670,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	else
 		j = npages;

-	ret = get_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
+	ret = pin_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
 	if (ret != j) {
 		i = 0;
 		j = ret;

@@ -706,7 +706,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_user_page(pages[i++]);
+		unpin_user_page(pages[i++]);

 done:
 	return ret;
@@ -75,7 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			put_user_pages_dirty_lock(&page, 1, dirty);
+			unpin_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);

@@ -141,7 +141,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 	ret = 0;

 	while (npages) {
-		ret = get_user_pages(cur_base,
+		ret = pin_user_pages(cur_base,
 				     min_t(unsigned long, npages,
 				     PAGE_SIZE / sizeof(struct page *)),
 				     gup_flags | FOLL_LONGTERM,

@@ -63,7 +63,7 @@ struct siw_mem *siw_mem_id2obj(struct siw_device *sdev, int stag_index)
 static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
 			   bool dirty)
 {
-	put_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
+	unpin_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
 }

 void siw_umem_release(struct siw_umem *umem, bool dirty)

@@ -426,7 +426,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 	while (nents) {
 		struct page **plist = &umem->page_chunk[i].plist[got];

-		rv = get_user_pages(first_page_va, nents,
+		rv = pin_user_pages(first_page_va, nents,
 				    foll_flags | FOLL_LONGTERM,
 				    plist, NULL);
 		if (rv < 0)
@@ -183,12 +183,12 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma,
 	dprintk(1, "init user [0x%lx+0x%lx => %d pages]\n",
 		data, size, dma->nr_pages);

-	err = get_user_pages(data & PAGE_MASK, dma->nr_pages,
+	err = pin_user_pages(data & PAGE_MASK, dma->nr_pages,
 			     flags | FOLL_LONGTERM, dma->pages, NULL);

 	if (err != dma->nr_pages) {
 		dma->nr_pages = (err >= 0) ? err : 0;
-		dprintk(1, "get_user_pages: err=%d [%d]\n", err,
+		dprintk(1, "pin_user_pages: err=%d [%d]\n", err,
 			dma->nr_pages);
 		return err < 0 ? err : -EINVAL;
 	}

@@ -349,8 +349,8 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);

 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
-			put_page(dma->pages[i]);
+		unpin_user_pages_dirty_lock(dma->pages, dma->nr_pages,
+					    dma->direction == DMA_FROM_DEVICE);
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}

@@ -296,7 +296,6 @@ static inline void bnx2x_dcb_config_qm(struct bnx2x *bp, enum cos_mode mode,
  * possible, the driver should only write the valid vnics into the internal
  * ram according to the appropriate port mode.
  */
-#define BITS_TO_BYTES(x) ((x)/8)

 /* CMNG constants, as derived from system spec calculations */
@@ -27,6 +27,7 @@
 #include <linux/firmware.h>
 #include <linux/etherdevice.h>
 #include <linux/if_arp.h>
+#include <linux/units.h>

 #include <net/mac80211.h>

@@ -6468,7 +6469,7 @@ il4965_set_hw_params(struct il_priv *il)
 	il->hw_params.valid_rx_ant = il->cfg->valid_rx_ant;

 	il->hw_params.ct_kill_threshold =
-		CELSIUS_TO_KELVIN(CT_KILL_THRESHOLD_LEGACY);
+		celsius_to_kelvin(CT_KILL_THRESHOLD_LEGACY);

 	il->hw_params.sens = &il4965_sensitivity;
 	il->hw_params.beacon_time_tsf_bits = IL4965_EXT_BEACON_TIME_POS;

@@ -17,6 +17,7 @@
 #include <linux/sched.h>
 #include <linux/skbuff.h>
 #include <linux/netdevice.h>
+#include <linux/units.h>
 #include <net/mac80211.h>
 #include <linux/etherdevice.h>
 #include <asm/unaligned.h>

@@ -1104,7 +1105,7 @@ il4965_fill_txpower_tbl(struct il_priv *il, u8 band, u16 channel, u8 is_ht40,
 	/* get current temperature (Celsius) */
 	current_temp = max(il->temperature, IL_TX_POWER_TEMPERATURE_MIN);
 	current_temp = min(il->temperature, IL_TX_POWER_TEMPERATURE_MAX);
-	current_temp = KELVIN_TO_CELSIUS(current_temp);
+	current_temp = kelvin_to_celsius(current_temp);

 	/* select thermal txpower adjustment params, based on channel group
 	 * (same frequency group used for mimo txatten adjustment) */

@@ -1610,8 +1611,8 @@ il4965_hw_get_temperature(struct il_priv *il)
 	temperature =
 	    (temperature * 97) / 100 + TEMPERATURE_CALIB_KELVIN_OFFSET;

-	D_TEMP("Calibrated temperature: %dK, %dC\n", temperature,
-	       KELVIN_TO_CELSIUS(temperature));
+	D_TEMP("Calibrated temperature: %dK, %ldC\n", temperature,
+	       kelvin_to_celsius(temperature));

 	return temperature;
 }

@@ -1670,12 +1671,12 @@ il4965_temperature_calib(struct il_priv *il)

 	if (il->temperature != temp) {
 		if (il->temperature)
-			D_TEMP("Temperature changed " "from %dC to %dC\n",
-			       KELVIN_TO_CELSIUS(il->temperature),
-			       KELVIN_TO_CELSIUS(temp));
+			D_TEMP("Temperature changed " "from %ldC to %ldC\n",
+			       kelvin_to_celsius(il->temperature),
+			       kelvin_to_celsius(temp));
 		else
-			D_TEMP("Temperature " "initialized to %dC\n",
-			       KELVIN_TO_CELSIUS(temp));
+			D_TEMP("Temperature " "initialized to %ldC\n",
+			       kelvin_to_celsius(temp));
 	}

 	il->temperature = temp;

@@ -779,9 +779,6 @@ struct il_sensitivity_ranges {
 	u16 nrg_th_cca;
 };

-#define KELVIN_TO_CELSIUS(x) ((x)-273)
-#define CELSIUS_TO_KELVIN(x) ((x)+273)
-
 /**
  * struct il_hw_params
  * @bcast_id: f/w broadcast station ID

@@ -237,11 +237,6 @@ struct iwl_sensitivity_ranges {
 	u16 nrg_th_cca;
 };

-
-#define KELVIN_TO_CELSIUS(x) ((x)-273)
-#define CELSIUS_TO_KELVIN(x) ((x)+273)
-
-
 /******************************************************************************
  *
  * Functions implemented in core module which are forward declared here

@@ -10,6 +10,8 @@
  *
  *****************************************************************************/

+#include <linux/units.h>
+
 /*
  * DVM device-specific data & functions
  */

@@ -345,7 +347,7 @@ static s32 iwl_temp_calib_to_offset(struct iwl_priv *priv)
 static void iwl5150_set_ct_threshold(struct iwl_priv *priv)
 {
 	const s32 volt2temp_coef = IWL_5150_VOLTAGE_TO_TEMPERATURE_COEFF;
-	s32 threshold = (s32)CELSIUS_TO_KELVIN(CT_KILL_THRESHOLD_LEGACY) -
+	s32 threshold = (s32)celsius_to_kelvin(CT_KILL_THRESHOLD_LEGACY) -
 			iwl_temp_calib_to_offset(priv);

 	priv->hw_params.ct_kill_threshold = threshold * volt2temp_coef;

@@ -381,7 +383,7 @@ static void iwl5150_temperature(struct iwl_priv *priv)
 	vt = le32_to_cpu(priv->statistics.common.temperature);
 	vt = vt / IWL_5150_VOLTAGE_TO_TEMPERATURE_COEFF + offset;
 	/* now vt hold the temperature in Kelvin */
-	priv->temperature = KELVIN_TO_CELSIUS(vt);
+	priv->temperature = kelvin_to_celsius(vt);
 	iwl_tt_handler(priv);
 }
@@ -337,13 +337,7 @@ static void pmem_release_disk(void *__pmem)
 	put_disk(pmem->disk);
 }

-static void pmem_pagemap_page_free(struct page *page)
-{
-	wake_up_var(&page->_refcount);
-}
-
 static const struct dev_pagemap_ops fsdax_pagemap_ops = {
-	.page_free		= pmem_pagemap_page_free,
 	.kill			= pmem_pagemap_kill,
 	.cleanup		= pmem_pagemap_cleanup,
 };
@@ -5,14 +5,11 @@
  */

 #include <linux/hwmon.h>
+#include <linux/units.h>
 #include <asm/unaligned.h>

 #include "nvme.h"

-/* These macros should be moved to linux/temperature.h */
-#define MILLICELSIUS_TO_KELVIN(t) DIV_ROUND_CLOSEST((t) + 273150, 1000)
-#define KELVIN_TO_MILLICELSIUS(t) ((t) * 1000L - 273150)
-
 struct nvme_hwmon_data {
 	struct nvme_ctrl *ctrl;
 	struct nvme_smart_log log;

@@ -35,7 +32,7 @@ static int nvme_get_temp_thresh(struct nvme_ctrl *ctrl, int sensor, bool under,
 		return -EIO;
 	if (ret < 0)
 		return ret;
-	*temp = KELVIN_TO_MILLICELSIUS(status & NVME_TEMP_THRESH_MASK);
+	*temp = kelvin_to_millicelsius(status & NVME_TEMP_THRESH_MASK);

 	return 0;
 }

@@ -46,7 +43,7 @@ static int nvme_set_temp_thresh(struct nvme_ctrl *ctrl, int sensor, bool under,
 	unsigned int threshold = sensor << NVME_TEMP_THRESH_SELECT_SHIFT;
 	int ret;

-	temp = MILLICELSIUS_TO_KELVIN(temp);
+	temp = millicelsius_to_kelvin(temp);
 	threshold |= clamp_val(temp, 0, NVME_TEMP_THRESH_MASK);

 	if (under)

@@ -88,7 +85,7 @@ static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
 	case hwmon_temp_min:
 		return nvme_get_temp_thresh(data->ctrl, channel, true, val);
 	case hwmon_temp_crit:
-		*val = KELVIN_TO_MILLICELSIUS(data->ctrl->cctemp);
+		*val = kelvin_to_millicelsius(data->ctrl->cctemp);
 		return 0;
 	default:
 		break;

@@ -105,7 +102,7 @@ static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
 			temp = get_unaligned_le16(log->temperature);
 		else
 			temp = le16_to_cpu(log->temp_sensor[channel - 1]);
-		*val = KELVIN_TO_MILLICELSIUS(temp);
+		*val = kelvin_to_millicelsius(temp);
 		break;
 	case hwmon_temp_alarm:
 		*val = !!(log->critical_warning & NVME_SMART_CRIT_TEMPERATURE);
@@ -257,12 +257,12 @@ static int goldfish_pipe_error_convert(int status)
 	}
 }

-static int pin_user_pages(unsigned long first_page,
-			  unsigned long last_page,
-			  unsigned int last_page_size,
-			  int is_write,
-			  struct page *pages[MAX_BUFFERS_PER_COMMAND],
-			  unsigned int *iter_last_page_size)
+static int goldfish_pin_pages(unsigned long first_page,
+			      unsigned long last_page,
+			      unsigned int last_page_size,
+			      int is_write,
+			      struct page *pages[MAX_BUFFERS_PER_COMMAND],
+			      unsigned int *iter_last_page_size)
 {
 	int ret;
 	int requested_pages = ((last_page - first_page) >> PAGE_SHIFT) + 1;

@@ -274,7 +274,7 @@ static int pin_user_pages(unsigned long first_page,
 		*iter_last_page_size = last_page_size;
 	}

-	ret = get_user_pages_fast(first_page, requested_pages,
+	ret = pin_user_pages_fast(first_page, requested_pages,
 				  !is_write ? FOLL_WRITE : 0,
 				  pages);
 	if (ret <= 0)

@@ -285,18 +285,6 @@ static int pin_user_pages(unsigned long first_page,
 	return ret;
 }

-static void release_user_pages(struct page **pages, int pages_count,
-			       int is_write, s32 consumed_size)
-{
-	int i;
-
-	for (i = 0; i < pages_count; i++) {
-		if (!is_write && consumed_size > 0)
-			set_page_dirty(pages[i]);
-		put_page(pages[i]);
-	}
-}
-
 /* Populate the call parameters, merging adjacent pages together */
 static void populate_rw_params(struct page **pages,
 			       int pages_count,

@@ -354,9 +342,9 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,
 	if (mutex_lock_interruptible(&pipe->lock))
 		return -ERESTARTSYS;

-	pages_count = pin_user_pages(first_page, last_page,
-				     last_page_size, is_write,
-				     pipe->pages, &iter_last_page_size);
+	pages_count = goldfish_pin_pages(first_page, last_page,
+					 last_page_size, is_write,
+					 pipe->pages, &iter_last_page_size);
 	if (pages_count < 0) {
 		mutex_unlock(&pipe->lock);
 		return pages_count;

@@ -372,7 +360,8 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,

 	*consumed_size = pipe->command_buffer->rw_params.consumed_size;

-	release_user_pages(pipe->pages, pages_count, is_write, *consumed_size);
+	unpin_user_pages_dirty_lock(pipe->pages, pages_count,
+				    !is_write && *consumed_size > 0);

 	mutex_unlock(&pipe->lock);
 	return 0;
@@ -33,9 +33,9 @@
 #include <linux/seq_file.h>
 #include <linux/platform_data/x86/asus-wmi.h>
 #include <linux/platform_device.h>
-#include <linux/thermal.h>
 #include <linux/acpi.h>
 #include <linux/dmi.h>
+#include <linux/units.h>

 #include <acpi/battery.h>
 #include <acpi/video.h>

@@ -1514,9 +1514,8 @@ static ssize_t asus_hwmon_temp1(struct device *dev,
 	if (err < 0)
 		return err;

-	value = DECI_KELVIN_TO_CELSIUS((value & 0xFFFF)) * 1000;
-
-	return sprintf(buf, "%d\n", value);
+	return sprintf(buf, "%ld\n",
+		       deci_kelvin_to_millicelsius(value & 0xFFFF));
 }

 /* Fan1 */
@@ -22,6 +22,7 @@
 #include <linux/slab.h>
 #include <linux/thermal.h>
 #include <linux/types.h>
+#include <linux/units.h>

 MODULE_AUTHOR("Thomas Sujith");
 MODULE_AUTHOR("Zhang Rui");

@@ -302,8 +303,10 @@ static ssize_t aux_show(struct device *dev, struct device_attribute *dev_attr,
 	int result;

 	result = sensor_get_auxtrip(attr->handle, idx, &value);
+	if (result)
+		return result;

-	return result ? result : sprintf(buf, "%lu", DECI_KELVIN_TO_CELSIUS(value));
+	return sprintf(buf, "%lu", deci_kelvin_to_celsius(value));
 }

 static ssize_t aux0_show(struct device *dev,

@@ -332,8 +335,8 @@ static ssize_t aux_store(struct device *dev, struct device_attribute *dev_attr,
 	if (value < 0)
 		return -EINVAL;

-	result = sensor_set_auxtrip(attr->handle, idx,
-				    CELSIUS_TO_DECI_KELVIN(value));
+	result = sensor_set_auxtrip(attr->handle, idx,
+				    celsius_to_deci_kelvin(value));
 	return result ? result : count;
 }
@@ -21,8 +21,6 @@

 #include "thermal_core.h"

-#define TO_MCELSIUS(c)			((c) * 1000)
-
 /* Thermal Manager Control and Status Register */
 #define PMU_TDC0_SW_RST_MASK		(0x1 << 1)
 #define PMU_TM_DISABLE_OFFS		0
@@ -8,6 +8,7 @@
 #include <linux/init.h>
 #include <linux/acpi.h>
 #include <linux/thermal.h>
+#include <linux/units.h>
 #include "int340x_thermal_zone.h"

 static int int340x_thermal_get_zone_temp(struct thermal_zone_device *zone,

@@ -34,7 +35,7 @@ static int int340x_thermal_get_zone_temp(struct thermal_zone_device *zone,
 		*temp = (unsigned long)conv_temp * 10;
 	} else
 		/* _TMP returns the temperature in tenths of degrees Kelvin */
-		*temp = DECI_KELVIN_TO_MILLICELSIUS(tmp);
+		*temp = deci_kelvin_to_millicelsius(tmp);

 	return 0;
 }

@@ -116,7 +117,7 @@ static int int340x_thermal_set_trip_temp(struct thermal_zone_device *zone,

 	snprintf(name, sizeof(name), "PAT%d", trip);
 	status = acpi_execute_simple_method(d->adev->handle, name,
-			MILLICELSIUS_TO_DECI_KELVIN(temp));
+			millicelsius_to_deci_kelvin(temp));
 	if (ACPI_FAILURE(status))
 		return -EIO;

@@ -163,7 +164,7 @@ static int int340x_thermal_get_trip_config(acpi_handle handle, char *name,
 	if (ACPI_FAILURE(status))
 		return -EIO;

-	*temp = DECI_KELVIN_TO_MILLICELSIUS(r);
+	*temp = deci_kelvin_to_millicelsius(r);

 	return 0;
 }
@@ -13,6 +13,7 @@
 #include <linux/pci.h>
 #include <linux/acpi.h>
 #include <linux/thermal.h>
+#include <linux/units.h>
 #include <linux/pm.h>

 /* Intel PCH thermal Device IDs */

@@ -93,7 +94,7 @@ static void pch_wpt_add_acpi_psv_trip(struct pch_thermal_device *ptd,
 	if (ACPI_SUCCESS(status)) {
 		unsigned long trip_temp;

-		trip_temp = DECI_KELVIN_TO_MILLICELSIUS(r);
+		trip_temp = deci_kelvin_to_millicelsius(r);
 		if (trip_temp) {
 			ptd->psv_temp = trip_temp;
 			ptd->psv_trip_id = *nr_trips;
@ -309,9 +309,8 @@ static int put_pfn(unsigned long pfn, int prot)
|
|||
{
|
||||
if (!is_invalid_reserved_pfn(pfn)) {
|
||||
struct page *page = pfn_to_page(pfn);
|
||||
if (prot & IOMMU_WRITE)
|
||||
SetPageDirty(page);
|
||||
put_page(page);
|
||||
|
||||
unpin_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
|
||||
return 1;
|
||||
}
|
||||
return 0;
|
||||
|
@ -322,7 +321,6 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
|
|||
{
|
||||
struct page *page[1];
|
||||
struct vm_area_struct *vma;
|
||||
struct vm_area_struct *vmas[1];
|
||||
unsigned int flags = 0;
|
||||
int ret;
|
||||
|
||||
|
@@ -330,33 +328,14 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		flags |= FOLL_WRITE;
 
-	down_read(&mm->mmap_sem);
-	if (mm == current->mm) {
-		ret = get_user_pages(vaddr, 1, flags | FOLL_LONGTERM, page,
-				     vmas);
-	} else {
-		ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
-					    vmas, NULL);
-		/*
-		 * The lifetime of a vaddr_get_pfn() page pin is
-		 * userspace-controlled. In the fs-dax case this could
-		 * lead to indefinite stalls in filesystem operations.
-		 * Disallow attempts to pin fs-dax pages via this
-		 * interface.
-		 */
-		if (ret > 0 && vma_is_fsdax(vmas[0])) {
-			ret = -EOPNOTSUPP;
-			put_page(page[0]);
-		}
-	}
-	up_read(&mm->mmap_sem);
-
+	ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+				    page, NULL, NULL);
 	if (ret == 1) {
 		*pfn = page_to_pfn(page[0]);
-		return 0;
+		ret = 0;
+		goto done;
 	}
 
 	down_read(&mm->mmap_sem);
 
 	vaddr = untagged_addr(vaddr);
 
 	vma = find_vma_intersection(mm, vaddr, vaddr + 1);

@@ -366,7 +345,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		if (is_invalid_reserved_pfn(*pfn))
 			ret = 0;
 	}
-
+done:
 	up_read(&mm->mmap_sem);
 	return ret;
 }

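The vfio hunks above collapse the two gup paths into a single pin_user_pages_remote() call, and the matching release moves from SetPageDirty()/put_page() to unpin_user_pages_dirty_lock(). A minimal sketch of the same acquire/release pairing, assuming the v5.6-era signatures shown in these hunks (the helper names pin_one_page/unpin_one_page are hypothetical):

/* Sketch: pin one page of a foreign mm for long-term access. */
static int pin_one_page(struct mm_struct *mm, unsigned long vaddr,
			bool writable, struct page **page)
{
	unsigned int flags = writable ? FOLL_WRITE : 0;
	int ret;

	ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
				    page, NULL, NULL);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	return 0;
}

/* Release side: mark dirty only if the pin was for writing. */
static void unpin_one_page(struct page *page, bool writable)
{
	unpin_user_pages_dirty_lock(&page, 1, writable);
}
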
fs/binfmt_elf.c (144 lines changed)

@@ -97,7 +97,7 @@ static struct linux_binfmt elf_format = {
 	.min_coredump	= ELF_EXEC_PAGESIZE,
 };
 
-#define BAD_ADDR(x) ((unsigned long)(x) >= TASK_SIZE)
+#define BAD_ADDR(x) (unlikely((unsigned long)(x) >= TASK_SIZE))
 
 static int set_brk(unsigned long start, unsigned long end, int prot)
 {

@@ -161,9 +161,11 @@ static int padzero(unsigned long elf_bss)
 #endif
 
 static int
-create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
-		unsigned long load_addr, unsigned long interp_load_addr)
+create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+		unsigned long load_addr, unsigned long interp_load_addr,
+		unsigned long e_entry)
 {
+	struct mm_struct *mm = current->mm;
 	unsigned long p = bprm->p;
 	int argc = bprm->argc;
 	int envc = bprm->envc;

@@ -176,7 +178,7 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	unsigned char k_rand_bytes[16];
 	int items;
 	elf_addr_t *elf_info;
-	int ei_index = 0;
+	int ei_index;
 	const struct cred *cred = current_cred();
 	struct vm_area_struct *vma;

@@ -226,12 +228,12 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 		return -EFAULT;
 
 	/* Create the ELF interpreter info */
-	elf_info = (elf_addr_t *)current->mm->saved_auxv;
+	elf_info = (elf_addr_t *)mm->saved_auxv;
 	/* update AT_VECTOR_SIZE_BASE if the number of NEW_AUX_ENT() changes */
 #define NEW_AUX_ENT(id, val) \
 	do { \
-		elf_info[ei_index++] = id; \
-		elf_info[ei_index++] = val; \
+		*elf_info++ = id; \
+		*elf_info++ = val; \
 	} while (0)
 
 #ifdef ARCH_DLINFO

@@ -251,7 +253,7 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	NEW_AUX_ENT(AT_PHNUM, exec->e_phnum);
 	NEW_AUX_ENT(AT_BASE, interp_load_addr);
 	NEW_AUX_ENT(AT_FLAGS, 0);
-	NEW_AUX_ENT(AT_ENTRY, exec->e_entry);
+	NEW_AUX_ENT(AT_ENTRY, e_entry);
 	NEW_AUX_ENT(AT_UID, from_kuid_munged(cred->user_ns, cred->uid));
 	NEW_AUX_ENT(AT_EUID, from_kuid_munged(cred->user_ns, cred->euid));
 	NEW_AUX_ENT(AT_GID, from_kgid_munged(cred->user_ns, cred->gid));

@@ -275,12 +277,13 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	}
 #undef NEW_AUX_ENT
 	/* AT_NULL is zero; clear the rest too */
-	memset(&elf_info[ei_index], 0,
-	       sizeof current->mm->saved_auxv - ei_index * sizeof elf_info[0]);
+	memset(elf_info, 0, (char *)mm->saved_auxv +
+			sizeof(mm->saved_auxv) - (char *)elf_info);
 
 	/* And advance past the AT_NULL entry. */
-	ei_index += 2;
+	elf_info += 2;
 
+	ei_index = elf_info - (elf_addr_t *)mm->saved_auxv;
 	sp = STACK_ADD(p, ei_index);
 
 	items = (argc + 1) + (envc + 1) + 1;

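The rewritten memset() computes its byte count purely from pointers: everything between the current fill position and the end of saved_auxv is cleared. A self-contained sketch of the same tail-zeroing pattern (the function name is illustrative only):

#include <string.h>

/* Zero the unused tail of a fixed-size array past a fill pointer. */
static void zero_tail(unsigned long *buf, size_t nmemb, unsigned long *fill)
{
	memset(fill, 0, (char *)(buf + nmemb) - (char *)fill);
}
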
@@ -299,7 +302,7 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	 * Grow the stack manually; some architectures have a limit on how
 	 * far ahead a user-space access may be in order to grow the stack.
 	 */
-	vma = find_extend_vma(current->mm, bprm->p);
+	vma = find_extend_vma(mm, bprm->p);
 	if (!vma)
 		return -EFAULT;

@@ -308,7 +311,7 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 		return -EFAULT;
 
 	/* Populate list of argv pointers back to argv strings. */
-	p = current->mm->arg_end = current->mm->arg_start;
+	p = mm->arg_end = mm->arg_start;
 	while (argc-- > 0) {
 		size_t len;
 		if (__put_user((elf_addr_t)p, sp++))

@@ -320,10 +323,10 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	}
 	if (__put_user(0, sp++))
 		return -EFAULT;
-	current->mm->arg_end = p;
+	mm->arg_end = p;
 
 	/* Populate list of envp pointers back to envp strings. */
-	current->mm->env_end = current->mm->env_start = p;
+	mm->env_end = mm->env_start = p;
 	while (envc-- > 0) {
 		size_t len;
 		if (__put_user((elf_addr_t)p, sp++))

@@ -335,10 +338,10 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
 	}
 	if (__put_user(0, sp++))
 		return -EFAULT;
-	current->mm->env_end = p;
+	mm->env_end = p;
 
 	/* Put the elf_info on the stack in the right place.  */
-	if (copy_to_user(sp, elf_info, ei_index * sizeof(elf_addr_t)))
+	if (copy_to_user(sp, mm->saved_auxv, ei_index * sizeof(elf_addr_t)))
 		return -EFAULT;
 	return 0;
 }

@@ -689,15 +692,17 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	int bss_prot = 0;
 	int retval, i;
 	unsigned long elf_entry;
+	unsigned long e_entry;
 	unsigned long interp_load_addr = 0;
 	unsigned long start_code, end_code, start_data, end_data;
 	unsigned long reloc_func_desc __maybe_unused = 0;
 	int executable_stack = EXSTACK_DEFAULT;
+	struct elfhdr *elf_ex = (struct elfhdr *)bprm->buf;
 	struct {
-		struct elfhdr elf_ex;
 		struct elfhdr interp_elf_ex;
 	} *loc;
 	struct arch_elf_state arch_state = INIT_ARCH_ELF_STATE;
+	struct mm_struct *mm;
 	struct pt_regs *regs;
 
 	loc = kmalloc(sizeof(*loc), GFP_KERNEL);

@@ -705,30 +710,27 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		retval = -ENOMEM;
 		goto out_ret;
 	}
 
-	/* Get the exec-header */
-	loc->elf_ex = *((struct elfhdr *)bprm->buf);
-
 	retval = -ENOEXEC;
 	/* First of all, some simple consistency checks */
-	if (memcmp(loc->elf_ex.e_ident, ELFMAG, SELFMAG) != 0)
+	if (memcmp(elf_ex->e_ident, ELFMAG, SELFMAG) != 0)
 		goto out;
 
-	if (loc->elf_ex.e_type != ET_EXEC && loc->elf_ex.e_type != ET_DYN)
+	if (elf_ex->e_type != ET_EXEC && elf_ex->e_type != ET_DYN)
 		goto out;
-	if (!elf_check_arch(&loc->elf_ex))
+	if (!elf_check_arch(elf_ex))
 		goto out;
-	if (elf_check_fdpic(&loc->elf_ex))
+	if (elf_check_fdpic(elf_ex))
 		goto out;
 	if (!bprm->file->f_op->mmap)
 		goto out;
 
-	elf_phdata = load_elf_phdrs(&loc->elf_ex, bprm->file);
+	elf_phdata = load_elf_phdrs(elf_ex, bprm->file);
 	if (!elf_phdata)
 		goto out;
 
 	elf_ppnt = elf_phdata;
-	for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+	for (i = 0; i < elf_ex->e_phnum; i++, elf_ppnt++) {
 		char *elf_interpreter;
 
 		if (elf_ppnt->p_type != PT_INTERP)

@@ -782,7 +784,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	}
 
 	elf_ppnt = elf_phdata;
-	for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++)
+	for (i = 0; i < elf_ex->e_phnum; i++, elf_ppnt++)
 		switch (elf_ppnt->p_type) {
 		case PT_GNU_STACK:
 			if (elf_ppnt->p_flags & PF_X)

@@ -792,7 +794,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			break;
 
 		case PT_LOPROC ... PT_HIPROC:
-			retval = arch_elf_pt_proc(&loc->elf_ex, elf_ppnt,
+			retval = arch_elf_pt_proc(elf_ex, elf_ppnt,
 						  bprm->file, false,
 						  &arch_state);
 			if (retval)

@@ -836,7 +838,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	 * still possible to return an error to the code that invoked
 	 * the exec syscall.
 	 */
-	retval = arch_check_elf(&loc->elf_ex,
+	retval = arch_check_elf(elf_ex,
 				!!interpreter, &loc->interp_elf_ex,
 				&arch_state);
 	if (retval)

@@ -849,8 +851,8 @@ static int load_elf_binary(struct linux_binprm *bprm)
 
 	/* Do this immediately, since STACK_TOP as used in setup_arg_pages
 	   may depend on the personality.  */
-	SET_PERSONALITY2(loc->elf_ex, &arch_state);
-	if (elf_read_implies_exec(loc->elf_ex, executable_stack))
+	SET_PERSONALITY2(*elf_ex, &arch_state);
+	if (elf_read_implies_exec(*elf_ex, executable_stack))
 		current->personality |= READ_IMPLIES_EXEC;
 
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)

@@ -877,7 +879,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	/* Now we do a little grungy work by mmapping the ELF image into
 	   the correct location in memory. */
 	for(i = 0, elf_ppnt = elf_phdata;
-	    i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+	    i < elf_ex->e_phnum; i++, elf_ppnt++) {
 		int elf_prot, elf_flags;
 		unsigned long k, vaddr;
 		unsigned long total_size = 0;

@@ -921,9 +923,9 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		 * If we are loading ET_EXEC or we have already performed
 		 * the ET_DYN load_addr calculations, proceed normally.
 		 */
-		if (loc->elf_ex.e_type == ET_EXEC || load_addr_set) {
+		if (elf_ex->e_type == ET_EXEC || load_addr_set) {
 			elf_flags |= MAP_FIXED;
-		} else if (loc->elf_ex.e_type == ET_DYN) {
+		} else if (elf_ex->e_type == ET_DYN) {
 			/*
 			 * This logic is run once for the first LOAD Program
 			 * Header for ET_DYN binaries to calculate the

@@ -972,7 +974,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			load_bias = ELF_PAGESTART(load_bias - vaddr);
 
 			total_size = total_mapping_size(elf_phdata,
-							loc->elf_ex.e_phnum);
+							elf_ex->e_phnum);
 			if (!total_size) {
 				retval = -EINVAL;
 				goto out_free_dentry;

@@ -990,7 +992,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		if (!load_addr_set) {
 			load_addr_set = 1;
 			load_addr = (elf_ppnt->p_vaddr - elf_ppnt->p_offset);
-			if (loc->elf_ex.e_type == ET_DYN) {
+			if (elf_ex->e_type == ET_DYN) {
 				load_bias += error -
 					     ELF_PAGESTART(load_bias + vaddr);
 				load_addr += load_bias;

@@ -998,7 +1000,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			}
 		}
 		k = elf_ppnt->p_vaddr;
-		if (k < start_code)
+		if ((elf_ppnt->p_flags & PF_X) && k < start_code)
 			start_code = k;
 		if (start_data < k)
 			start_data = k;

@@ -1031,7 +1033,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		}
 	}
 
-	loc->elf_ex.e_entry += load_bias;
+	e_entry = elf_ex->e_entry + load_bias;
 	elf_bss += load_bias;
 	elf_brk += load_bias;
 	start_code += load_bias;

@@ -1074,7 +1076,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		allow_write_access(interpreter);
 		fput(interpreter);
 	} else {
-		elf_entry = loc->elf_ex.e_entry;
+		elf_entry = e_entry;
 		if (BAD_ADDR(elf_entry)) {
 			retval = -EINVAL;
 			goto out_free_dentry;

@@ -1092,15 +1094,17 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		goto out;
 #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */
 
-	retval = create_elf_tables(bprm, &loc->elf_ex,
-			  load_addr, interp_load_addr);
+	retval = create_elf_tables(bprm, elf_ex,
+			  load_addr, interp_load_addr, e_entry);
 	if (retval < 0)
 		goto out;
-	current->mm->end_code = end_code;
-	current->mm->start_code = start_code;
-	current->mm->start_data = start_data;
-	current->mm->end_data = end_data;
-	current->mm->start_stack = bprm->p;
+
+	mm = current->mm;
+	mm->end_code = end_code;
+	mm->start_code = start_code;
+	mm->start_data = start_data;
+	mm->end_data = end_data;
+	mm->start_stack = bprm->p;
 
 	if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) {
 		/*

@@ -1111,12 +1115,11 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		 * growing down), and into the unused ELF_ET_DYN_BASE region.
 		 */
 		if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
-		    loc->elf_ex.e_type == ET_DYN && !interpreter)
-			current->mm->brk = current->mm->start_brk =
-				ELF_ET_DYN_BASE;
+		    elf_ex->e_type == ET_DYN && !interpreter) {
+			mm->brk = mm->start_brk = ELF_ET_DYN_BASE;
+		}
 
-		current->mm->brk = current->mm->start_brk =
-			arch_randomize_brk(current->mm);
+		mm->brk = mm->start_brk = arch_randomize_brk(mm);
 #ifdef compat_brk_randomized
 		current->brk_randomized = 1;
 #endif

@@ -1574,6 +1577,7 @@ static void fill_siginfo_note(struct memelfnote *note, user_siginfo_t *csigdata,
  */
 static int fill_files_note(struct memelfnote *note)
 {
+	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned count, size, names_ofs, remaining, n;
 	user_long_t *data;

@@ -1581,7 +1585,7 @@ static int fill_files_note(struct memelfnote *note)
 	char *name_base, *name_curpos;
 
 	/* *Estimated* file count and total data size needed */
-	count = current->mm->map_count;
+	count = mm->map_count;
 	if (count > UINT_MAX / 64)
 		return -EINVAL;
 	size = count * 64;

@@ -1591,6 +1595,10 @@ static int fill_files_note(struct memelfnote *note)
 	if (size >= MAX_FILE_NOTE_SIZE) /* paranoia check */
 		return -EINVAL;
 	size = round_up(size, PAGE_SIZE);
+	/*
+	 * "size" can be 0 here legitimately.
+	 * Let it ENOMEM and omit NT_FILE section which will be empty anyway.
+	 */
 	data = kvmalloc(size, GFP_KERNEL);
 	if (ZERO_OR_NULL_PTR(data))
 		return -ENOMEM;

@@ -1599,7 +1607,7 @@ static int fill_files_note(struct memelfnote *note)
 	name_base = name_curpos = ((char *)data) + names_ofs;
 	remaining = size - names_ofs;
 	count = 0;
-	for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {
+	for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
 		struct file *file;
 		const char *filename;
 
@@ -1633,10 +1641,10 @@ static int fill_files_note(struct memelfnote *note)
 	data[0] = count;
 	data[1] = PAGE_SIZE;
 	/*
-	 * Count usually is less than current->mm->map_count,
+	 * Count usually is less than mm->map_count,
 	 * we need to move filenames down.
 	 */
-	n = current->mm->map_count - count;
+	n = mm->map_count - count;
 	if (n != 0) {
 		unsigned shift_bytes = n * 3 * sizeof(data[0]);
 		memmove(name_base - shift_bytes, name_base,

@@ -2182,7 +2190,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 	int segs, i;
 	size_t vma_data_size = 0;
 	struct vm_area_struct *vma, *gate_vma;
-	struct elfhdr *elf = NULL;
+	struct elfhdr elf;
 	loff_t offset = 0, dataoff;
 	struct elf_note_info info = { };
 	struct elf_phdr *phdr4note = NULL;

@@ -2203,10 +2211,6 @@ static int elf_core_dump(struct coredump_params *cprm)
 	 * exists while dumping the mm->vm_next areas to the core file.
 	 */
 
-	/* alloc memory for large data structures: too large to be on stack */
-	elf = kmalloc(sizeof(*elf), GFP_KERNEL);
-	if (!elf)
-		goto out;
 	/*
 	 * The number of segs are recored into ELF header as 16bit value.
 	 * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.

@@ -2230,7 +2234,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 	 * Collect all the non-memory information about the process for the
 	 * notes.  This also sets up the file header.
 	 */
-	if (!fill_note_info(elf, e_phnum, &info, cprm->siginfo, cprm->regs))
+	if (!fill_note_info(&elf, e_phnum, &info, cprm->siginfo, cprm->regs))
 		goto cleanup;
 
 	has_dumped = 1;

@@ -2238,7 +2242,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 	fs = get_fs();
 	set_fs(KERNEL_DS);
 
-	offset += sizeof(*elf);				/* Elf header */
+	offset += sizeof(elf);				/* Elf header */
 	offset += segs * sizeof(struct elf_phdr);	/* Program headers */
 
 	/* Write notes phdr entry */

@@ -2257,11 +2261,13 @@ static int elf_core_dump(struct coredump_params *cprm)
 
 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
 
-	if (segs - 1 > ULONG_MAX / sizeof(*vma_filesz))
-		goto end_coredump;
+	/*
+	 * Zero vma process will get ZERO_SIZE_PTR here.
+	 * Let coredump continue for register state at least.
+	 */
 	vma_filesz = kvmalloc(array_size(sizeof(*vma_filesz), (segs - 1)),
 			      GFP_KERNEL);
-	if (ZERO_OR_NULL_PTR(vma_filesz))
+	if (!vma_filesz)
 		goto end_coredump;
 
 	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;

@@ -2281,12 +2287,12 @@ static int elf_core_dump(struct coredump_params *cprm)
 		shdr4extnum = kmalloc(sizeof(*shdr4extnum), GFP_KERNEL);
 		if (!shdr4extnum)
 			goto end_coredump;
-		fill_extnum_info(elf, shdr4extnum, e_shoff, segs);
+		fill_extnum_info(&elf, shdr4extnum, e_shoff, segs);
 	}
 
 	offset = dataoff;
 
-	if (!dump_emit(cprm, elf, sizeof(*elf)))
+	if (!dump_emit(cprm, &elf, sizeof(elf)))
 		goto end_coredump;
 
 	if (!dump_emit(cprm, phdr4note, sizeof(*phdr4note)))

@@ -2370,8 +2376,6 @@ static int elf_core_dump(struct coredump_params *cprm)
 	kfree(shdr4extnum);
 	kvfree(vma_filesz);
 	kfree(phdr4note);
-	kfree(elf);
 out:
 	return has_dumped;
 }

@@ -1290,7 +1290,7 @@ int btrfs_decompress_buf2page(const char *buf, unsigned long buf_start,
 	/* copy bytes from the working buffer into the pages */
 	while (working_bytes > 0) {
 		bytes = min_t(unsigned long, bvec.bv_len,
-				PAGE_SIZE - buf_offset);
+				PAGE_SIZE - (buf_offset % PAGE_SIZE));
 		bytes = min(bytes, working_bytes);
 
 		kaddr = kmap_atomic(bvec.bv_page);

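With the multi-page s390 workspace introduced below, the working-buffer offset can exceed PAGE_SIZE, so the per-iteration copy length has to be computed from the offset within the current page. Spot check, assuming 4 KiB pages: for buf_offset = 5000, the old expression PAGE_SIZE - buf_offset would underflow, while PAGE_SIZE - (5000 % 4096) = 3288 correctly bounds the copy at the page boundary. As a standalone sketch:

/* Bytes remaining to the end of the page containing buf_offset. */
static inline unsigned long bytes_to_page_end(unsigned long buf_offset)
{
	return PAGE_SIZE - (buf_offset % PAGE_SIZE);
}
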
fs/btrfs/zlib.c (135 lines changed)

@@ -20,9 +20,13 @@
 #include <linux/refcount.h>
 #include "compression.h"
 
+/* workspace buffer size for s390 zlib hardware support */
+#define ZLIB_DFLTCC_BUF_SIZE    (4 * PAGE_SIZE)
+
 struct workspace {
 	z_stream strm;
 	char *buf;
+	unsigned int buf_size;
 	struct list_head list;
 	int level;
 };

@@ -61,7 +65,21 @@ struct list_head *zlib_alloc_workspace(unsigned int level)
 			zlib_inflate_workspacesize());
 	workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
 	workspace->level = level;
-	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->buf = NULL;
+	/*
+	 * In case of s390 zlib hardware support, allocate larger workspace
+	 * buffer. If allocator fails, fall back to a single page buffer.
+	 */
+	if (zlib_deflate_dfltcc_enabled()) {
+		workspace->buf = kmalloc(ZLIB_DFLTCC_BUF_SIZE,
+					 __GFP_NOMEMALLOC | __GFP_NORETRY |
+					 __GFP_NOWARN | GFP_NOIO);
+		workspace->buf_size = ZLIB_DFLTCC_BUF_SIZE;
+	}
+	if (!workspace->buf) {
+		workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+		workspace->buf_size = PAGE_SIZE;
+	}
 	if (!workspace->strm.workspace || !workspace->buf)
 		goto fail;

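The allocation above is opportunistic: the large dfltcc buffer is requested with __GFP_NORETRY | __GFP_NOWARN so a failure is cheap and silent, and the code falls back to a single page. A sketch of the pattern in isolation (the helper name is hypothetical; ZLIB_DFLTCC_BUF_SIZE is as defined in the hunk above):

/*
 * Try a large buffer first, without triggering reclaim or warnings;
 * fall back to one page if that fails.
 */
static char *alloc_zlib_buf(unsigned int *buf_size)
{
	char *buf = kmalloc(ZLIB_DFLTCC_BUF_SIZE,
			    __GFP_NOMEMALLOC | __GFP_NORETRY |
			    __GFP_NOWARN | GFP_NOIO);
	if (buf) {
		*buf_size = ZLIB_DFLTCC_BUF_SIZE;
		return buf;
	}
	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (buf)
		*buf_size = PAGE_SIZE;
	return buf;
}
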
@@ -85,6 +103,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 	struct page *in_page = NULL;
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
+	unsigned int in_buf_pages;
 	unsigned long len = *total_out;
 	unsigned long nr_dest_pages = *out_pages;
 	const unsigned long max_out = nr_dest_pages * PAGE_SIZE;

@@ -102,9 +121,6 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 	workspace->strm.total_in = 0;
 	workspace->strm.total_out = 0;
 
-	in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-	data_in = kmap(in_page);
-
 	out_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
 	if (out_page == NULL) {
 		ret = -ENOMEM;

@@ -114,12 +130,51 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 	pages[0] = out_page;
 	nr_pages = 1;
 
-	workspace->strm.next_in = data_in;
+	workspace->strm.next_in = workspace->buf;
+	workspace->strm.avail_in = 0;
 	workspace->strm.next_out = cpage_out;
 	workspace->strm.avail_out = PAGE_SIZE;
-	workspace->strm.avail_in = min(len, PAGE_SIZE);
 
 	while (workspace->strm.total_in < len) {
+		/*
+		 * Get next input pages and copy the contents to
+		 * the workspace buffer if required.
+		 */
+		if (workspace->strm.avail_in == 0) {
+			bytes_left = len - workspace->strm.total_in;
+			in_buf_pages = min(DIV_ROUND_UP(bytes_left, PAGE_SIZE),
+					   workspace->buf_size / PAGE_SIZE);
+			if (in_buf_pages > 1) {
+				int i;
+
+				for (i = 0; i < in_buf_pages; i++) {
+					if (in_page) {
+						kunmap(in_page);
+						put_page(in_page);
+					}
+					in_page = find_get_page(mapping,
+								start >> PAGE_SHIFT);
+					data_in = kmap(in_page);
+					memcpy(workspace->buf + i * PAGE_SIZE,
+					       data_in, PAGE_SIZE);
+					start += PAGE_SIZE;
+				}
+				workspace->strm.next_in = workspace->buf;
+			} else {
+				if (in_page) {
+					kunmap(in_page);
+					put_page(in_page);
+				}
+				in_page = find_get_page(mapping,
+							start >> PAGE_SHIFT);
+				data_in = kmap(in_page);
+				start += PAGE_SIZE;
+				workspace->strm.next_in = data_in;
+			}
+			workspace->strm.avail_in = min(bytes_left,
+						       (unsigned long) workspace->buf_size);
+		}
+
 		ret = zlib_deflate(&workspace->strm, Z_SYNC_FLUSH);
 		if (ret != Z_OK) {
 			pr_debug("BTRFS: deflate in loop returned %d\n",

@@ -161,33 +216,43 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		/* we're all done */
 		if (workspace->strm.total_in >= len)
 			break;
-
-		/* we've read in a full page, get a new one */
-		if (workspace->strm.avail_in == 0) {
-			if (workspace->strm.total_out > max_out)
-				break;
-
-			bytes_left = len - workspace->strm.total_in;
-			kunmap(in_page);
-			put_page(in_page);
-
-			start += PAGE_SIZE;
-			in_page = find_get_page(mapping,
-						start >> PAGE_SHIFT);
-			data_in = kmap(in_page);
-			workspace->strm.avail_in = min(bytes_left,
-						       PAGE_SIZE);
-			workspace->strm.next_in = data_in;
-		}
+		if (workspace->strm.total_out > max_out)
+			break;
 	}
 	workspace->strm.avail_in = 0;
-	ret = zlib_deflate(&workspace->strm, Z_FINISH);
-	zlib_deflateEnd(&workspace->strm);
-
-	if (ret != Z_STREAM_END) {
-		ret = -EIO;
-		goto out;
+	/*
+	 * Call deflate with Z_FINISH flush parameter providing more output
+	 * space but no more input data, until it returns with Z_STREAM_END.
+	 */
+	while (ret != Z_STREAM_END) {
+		ret = zlib_deflate(&workspace->strm, Z_FINISH);
+		if (ret == Z_STREAM_END)
+			break;
+		if (ret != Z_OK && ret != Z_BUF_ERROR) {
+			zlib_deflateEnd(&workspace->strm);
+			ret = -EIO;
+			goto out;
+		} else if (workspace->strm.avail_out == 0) {
+			/* get another page for the stream end */
+			kunmap(out_page);
+			if (nr_pages == nr_dest_pages) {
+				out_page = NULL;
+				ret = -E2BIG;
+				goto out;
+			}
+			out_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
+			if (out_page == NULL) {
+				ret = -ENOMEM;
+				goto out;
+			}
+			cpage_out = kmap(out_page);
+			pages[nr_pages] = out_page;
+			nr_pages++;
+			workspace->strm.avail_out = PAGE_SIZE;
+			workspace->strm.next_out = cpage_out;
+		}
 	}
+	zlib_deflateEnd(&workspace->strm);
 
 	if (workspace->strm.total_out >= workspace->strm.total_in) {
 		ret = -E2BIG;

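The new loop follows the standard zlib contract: once all input is consumed, deflate() must be called repeatedly with Z_FINISH, supplying fresh output space on Z_OK/Z_BUF_ERROR, until it returns Z_STREAM_END. A minimal userspace sketch of the same drain loop (ordinary zlib, not the kernel's zlib_deflate; consume() is a caller-supplied sink):

#include <zlib.h>

/* Drain a deflate stream with Z_FINISH into fixed-size chunks. */
static int finish_stream(z_stream *strm, unsigned char *chunk, size_t chunklen,
			 void (*consume)(const unsigned char *, size_t))
{
	int ret;

	strm->avail_in = 0;	/* no more input */
	do {
		strm->next_out = chunk;
		strm->avail_out = chunklen;
		ret = deflate(strm, Z_FINISH);
		if (ret != Z_OK && ret != Z_BUF_ERROR && ret != Z_STREAM_END)
			return -1;
		consume(chunk, chunklen - strm->avail_out);
	} while (ret != Z_STREAM_END);
	return 0;
}
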
@@ -231,7 +296,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 	workspace->strm.total_out = 0;
 	workspace->strm.next_out = workspace->buf;
-	workspace->strm.avail_out = PAGE_SIZE;
+	workspace->strm.avail_out = workspace->buf_size;
 
 	/* If it's deflate, and it's got no preset dictionary, then
 	   we can tell zlib to skip the adler32 check. */

@@ -270,7 +335,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 		}
 
 		workspace->strm.next_out = workspace->buf;
-		workspace->strm.avail_out = PAGE_SIZE;
+		workspace->strm.avail_out = workspace->buf_size;
 
 		if (workspace->strm.avail_in == 0) {
 			unsigned long tmp;

@@ -320,7 +385,7 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	workspace->strm.total_in = 0;
 
 	workspace->strm.next_out = workspace->buf;
-	workspace->strm.avail_out = PAGE_SIZE;
+	workspace->strm.avail_out = workspace->buf_size;
 	workspace->strm.total_out = 0;
 	/* If it's deflate, and it's got no preset dictionary, then
 	   we can tell zlib to skip the adler32 check. */

@@ -364,7 +429,7 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 		buf_offset = 0;
 
 		bytes = min(PAGE_SIZE - pg_offset,
-			    PAGE_SIZE - buf_offset);
+			    PAGE_SIZE - (buf_offset % PAGE_SIZE));
 		bytes = min(bytes, bytes_left);
 
 		kaddr = kmap_atomic(dest_page);

@@ -375,7 +440,7 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 		bytes_left -= bytes;
 next:
 		workspace->strm.next_out = workspace->buf;
-		workspace->strm.avail_out = PAGE_SIZE;
+		workspace->strm.avail_out = workspace->buf_size;
 	}
 
 	if (ret != Z_STREAM_END && bytes_left != 0)

@@ -760,6 +760,11 @@ int setup_arg_pages(struct linux_binprm *bprm,
 		goto out_unlock;
 	BUG_ON(prev != vma);
 
+	if (unlikely(vm_flags & VM_EXEC)) {
+		pr_warn_once("process '%pD4' started with executable stack\n",
+			     bprm->file);
+	}
+
 	/* Move stack pages down in memory. */
 	if (stack_shift) {
 		ret = shift_arg_pages(vma, stack_shift);

@@ -2063,7 +2063,7 @@ void wb_workfn(struct work_struct *work)
 						struct bdi_writeback, dwork);
 	long pages_written;
 
-	set_worker_desc("flush-%s", dev_name(wb->bdi->dev));
+	set_worker_desc("flush-%s", bdi_dev_name(wb->bdi));
 	current->flags |= PF_SWAPWRITE;
 
 	if (likely(!current_is_workqueue_rescuer() ||

@@ -6005,7 +6005,7 @@ static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
 		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
 
 		for (j = 0; j < imu->nr_bvecs; j++)
-			put_user_page(imu->bvec[j].bv_page);
+			unpin_user_page(imu->bvec[j].bv_page);
 
 		if (ctx->account_mem)
 			io_unaccount_mem(ctx->user, imu->nr_bvecs);

@@ -6126,7 +6126,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
 
 		ret = 0;
 		down_read(&current->mm->mmap_sem);
-		pret = get_user_pages(ubuf, nr_pages,
+		pret = pin_user_pages(ubuf, nr_pages,
 				      FOLL_WRITE | FOLL_LONGTERM,
 				      pages, vmas);
 		if (pret == nr_pages) {

@@ -6150,7 +6150,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
 			 * release any pages we did get
 			 */
 			if (pret > 0)
-				put_user_pages(pages, pret);
+				unpin_user_pages(pages, pret);
 			if (ctx->account_mem)
 				io_unaccount_mem(ctx->user, nr_pages);
 			kvfree(imu->bvec);

@@ -73,7 +73,7 @@ static void o2quo_fence_self(void)
 		     "system by restarting ***\n");
 		emergency_restart();
 		break;
-	};
+	}
 }
 
 /* Indicate that a timeout occurred on a heartbeat region write. The

@@ -1,6 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-ccflags-y := -I $(srctree)/$(src)/..
-
 obj-$(CONFIG_OCFS2_FS_O2CB) += ocfs2_dlm.o
 
 ocfs2_dlm-objs := dlmdomain.o dlmdebug.o dlmthread.o dlmrecovery.o \

@@ -23,15 +23,15 @@
 #include <linux/spinlock.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"
 
 #define MLOG_MASK_PREFIX ML_DLM
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static void dlm_update_lvb(struct dlm_ctxt *dlm, struct dlm_lock_resource *res,
 			   struct dlm_lock *lock);

@@ -688,10 +688,6 @@ struct dlm_begin_reco
 	__be32 pad2;
 };
 
-
-#define BITS_PER_BYTE 8
-#define BITS_TO_BYTES(bits) (((bits)+BITS_PER_BYTE-1)/BITS_PER_BYTE)
-
 struct dlm_query_join_request
 {
 	u8 node_idx;

@@ -23,9 +23,9 @@
 #include <linux/spinlock.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"

@@ -33,7 +33,7 @@
 #include "dlmconvert.h"
 
 #define MLOG_MASK_PREFIX ML_DLM
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 /* NOTE: __dlmconvert_master is the only function in here that
  * needs a spinlock held on entry (res->spinlock) and it is the

@@ -17,9 +17,9 @@
 #include <linux/debugfs.h>
 #include <linux/export.h>
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"

@@ -27,7 +27,7 @@
 #include "dlmdebug.h"
 
 #define MLOG_MASK_PREFIX ML_DLM
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static int stringify_lockname(const char *lockname, int locklen, char *buf,
 			      int len);

@@ -20,9 +20,9 @@
 #include <linux/debugfs.h>
 #include <linux/sched/signal.h>
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"

@@ -30,7 +30,7 @@
 #include "dlmdebug.h"
 
 #define MLOG_MASK_PREFIX (ML_DLM|ML_DLM_DOMAIN)
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 /*
  * ocfs2 node maps are array of long int, which limits to send them freely

@@ -25,9 +25,9 @@
 #include <linux/delay.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"

@@ -35,7 +35,7 @@
 #include "dlmconvert.h"
 
 #define MLOG_MASK_PREFIX ML_DLM
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static struct kmem_cache *dlm_lock_cache;
 
@@ -25,9 +25,9 @@
 #include <linux/delay.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"

@@ -35,7 +35,7 @@
 #include "dlmdebug.h"
 
 #define MLOG_MASK_PREFIX (ML_DLM|ML_DLM_MASTER)
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static void dlm_mle_node_down(struct dlm_ctxt *dlm,
 			      struct dlm_master_list_entry *mle,

@@ -2554,8 +2554,6 @@ static int dlm_migrate_lockres(struct dlm_ctxt *dlm,
 	if (!dlm_grab(dlm))
 		return -EINVAL;
 
-	BUG_ON(target == O2NM_MAX_NODES);
-
 	name = res->lockname.name;
 	namelen = res->lockname.len;
 
@@ -26,16 +26,16 @@
 #include <linux/delay.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"
 #include "dlmdomain.h"
 
 #define MLOG_MASK_PREFIX (ML_DLM|ML_DLM_RECOVERY)
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static void dlm_do_local_recovery_cleanup(struct dlm_ctxt *dlm, u8 dead_node);
 
@@ -1668,7 +1668,7 @@ static int dlm_lockres_master_requery(struct dlm_ctxt *dlm,
 int dlm_do_master_requery(struct dlm_ctxt *dlm, struct dlm_lock_resource *res,
 			  u8 nodenum, u8 *real_master)
 {
-	int ret = -EINVAL;
+	int ret;
 	struct dlm_master_requery req;
 	int status = DLM_LOCK_RES_OWNER_UNKNOWN;
 
@@ -25,16 +25,16 @@
 #include <linux/delay.h>
 
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"
 #include "dlmdomain.h"
 
 #define MLOG_MASK_PREFIX (ML_DLM|ML_DLM_THREAD)
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 static int dlm_thread(void *data);
 static void dlm_flush_asts(struct dlm_ctxt *dlm);

@@ -23,15 +23,15 @@
 #include <linux/spinlock.h>
 #include <linux/delay.h>
 
-#include "cluster/heartbeat.h"
-#include "cluster/nodemanager.h"
-#include "cluster/tcp.h"
+#include "../cluster/heartbeat.h"
+#include "../cluster/nodemanager.h"
+#include "../cluster/tcp.h"
 
 #include "dlmapi.h"
 #include "dlmcommon.h"
 
 #define MLOG_MASK_PREFIX ML_DLM
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 #define DLM_UNLOCK_FREE_LOCK           0x00000001
 #define DLM_UNLOCK_CALL_AST            0x00000002

@@ -1,6 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-ccflags-y := -I $(srctree)/$(src)/..
-
 obj-$(CONFIG_OCFS2_FS) += ocfs2_dlmfs.o
 
 ocfs2_dlmfs-objs := userdlm.o dlmfs.o

@@ -33,11 +33,11 @@
 
 #include <linux/uaccess.h>
 
-#include "stackglue.h"
+#include "../stackglue.h"
 #include "userdlm.h"
 
 #define MLOG_MASK_PREFIX ML_DLMFS
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 
 static const struct super_operations dlmfs_ops;

@@ -21,12 +21,12 @@
 #include <linux/types.h>
 #include <linux/crc32.h>
 
-#include "ocfs2_lockingver.h"
-#include "stackglue.h"
+#include "../ocfs2_lockingver.h"
+#include "../stackglue.h"
 #include "userdlm.h"
 
 #define MLOG_MASK_PREFIX ML_DLMFS
-#include "cluster/masklog.h"
+#include "../cluster/masklog.h"
 
 
 static inline struct user_lock_res *user_lksb_to_lock_res(struct ocfs2_dlm_lksb *lksb)

@@ -570,7 +570,7 @@ void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
 		mlog_bug_on_msg(1, "type: %d\n", type);
 		ops = NULL; /* thanks, gcc */
 		break;
-	};
+	}
 
 	ocfs2_build_lock_name(type, OCFS2_I(inode)->ip_blkno,
 			      generation, res->l_name);

|
@ -597,9 +597,11 @@ static inline void ocfs2_update_inode_fsync_trans(handle_t *handle,
|
|||
{
|
||||
struct ocfs2_inode_info *oi = OCFS2_I(inode);
|
||||
|
||||
oi->i_sync_tid = handle->h_transaction->t_tid;
|
||||
if (datasync)
|
||||
oi->i_datasync_tid = handle->h_transaction->t_tid;
|
||||
if (!is_handle_aborted(handle)) {
|
||||
oi->i_sync_tid = handle->h_transaction->t_tid;
|
||||
if (datasync)
|
||||
oi->i_datasync_tid = handle->h_transaction->t_tid;
|
||||
}
|
||||
}
|
||||
|
||||
#endif /* OCFS2_JOURNAL_H */
|
||||
|
|
|
@@ -586,8 +586,7 @@ static int __ocfs2_mknod_locked(struct inode *dir,
 		mlog_errno(status);
 	}
 
-	oi->i_sync_tid = handle->h_transaction->t_tid;
-	oi->i_datasync_tid = handle->h_transaction->t_tid;
+	ocfs2_update_inode_fsync_trans(handle, inode, 1);
 
 leave:
 	if (status < 0) {

@@ -2240,7 +2240,8 @@ int reiserfs_insert_item(struct reiserfs_transaction_handle *th,
 	/* also releases the path */
 	unfix_nodes(&s_ins_balance);
 #ifdef REISERQUOTA_DEBUG
-	reiserfs_debug(th->t_super, REISERFS_DEBUG_CODE,
+	if (inode)
+		reiserfs_debug(th->t_super, REISERFS_DEBUG_CODE,
 		       "reiserquota insert_item(): freeing %u id=%u type=%c",
 		       quota_bytes, inode->i_uid, head2type(ih));
 #endif

@@ -13,6 +13,7 @@
 #include <linux/fs.h>
 #include <linux/sched.h>
 #include <linux/blkdev.h>
+#include <linux/device.h>
 #include <linux/writeback.h>
 #include <linux/blk-cgroup.h>
 #include <linux/backing-dev-defs.h>

@@ -504,4 +505,13 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
 				  (1 << WB_async_congested));
 }
 
+extern const char *bdi_unknown_name;
+
+static inline const char *bdi_dev_name(struct backing_dev_info *bdi)
+{
+	if (!bdi || !bdi->dev)
+		return bdi_unknown_name;
+	return dev_name(bdi->dev);
+}
+
 #endif	/* _LINUX_BACKING_DEV_H */

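bdi_dev_name() tolerates both a NULL bdi and a bdi whose device has already gone away, which is what lets the wb_workfn() hunk earlier in this commit log a name without racing device teardown. A usage sketch (the wrapper function is illustrative only):

/* Log a writeback device name without checking bdi/bdi->dev first. */
static void log_wb_device(struct bdi_writeback *wb)
{
	pr_info("writeback on %s\n", bdi_dev_name(wb->bdi));
}
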
@@ -13,6 +13,7 @@
 
 #define BITS_PER_TYPE(type)	(sizeof(type) * BITS_PER_BYTE)
 #define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(long))
+#define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(char))
 
 extern unsigned int __sw_hweight8(unsigned int w);
 extern unsigned int __sw_hweight16(unsigned int w);

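BITS_TO_BYTES() rounds up, so partial bytes count: BITS_TO_BYTES(9) == 2. This generic copy replaces the private one removed from ocfs2's dlmcommon.h earlier in this commit. A small usage sketch (the helper name is hypothetical):

/* Allocate a zeroed byte buffer large enough to hold nbits bits. */
static void *alloc_bitmap(unsigned int nbits, gfp_t gfp)
{
	return kzalloc(BITS_TO_BYTES(nbits), gfp);
}
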
@@ -2737,7 +2737,6 @@ static inline int filemap_fdatawait(struct address_space *mapping)
 
 extern bool filemap_range_has_page(struct address_space *, loff_t lstart,
 				  loff_t lend);
-extern int filemap_write_and_wait(struct address_space *mapping);
 extern int filemap_write_and_wait_range(struct address_space *mapping,
 				        loff_t lstart, loff_t lend);
 extern int __filemap_fdatawrite_range(struct address_space *mapping,

@@ -2747,6 +2746,11 @@ extern int filemap_fdatawrite_range(struct address_space *mapping,
 extern int filemap_check_errors(struct address_space *mapping);
 extern void __filemap_set_wb_err(struct address_space *mapping, int err);
 
+static inline int filemap_write_and_wait(struct address_space *mapping)
+{
+	return filemap_write_and_wait_range(mapping, 0, LLONG_MAX);
+}
+
 extern int __must_check file_fdatawait_range(struct file *file, loff_t lstart,
 						loff_t lend);
 extern int __must_check file_check_and_advance_wb_err(struct file *file);

@@ -28,6 +28,7 @@ struct io_mapping {
 
 #ifdef CONFIG_HAVE_ATOMIC_IOMAP
 
+#include <linux/pfn.h>
 #include <asm/iomap.h>
 /*
  * For small address space machines, mapping large objects

@@ -64,12 +65,10 @@ io_mapping_map_atomic_wc(struct io_mapping *mapping,
 			 unsigned long offset)
 {
 	resource_size_t phys_addr;
-	unsigned long pfn;
 
 	BUG_ON(offset >= mapping->size);
 	phys_addr = mapping->base + offset;
-	pfn = (unsigned long) (phys_addr >> PAGE_SHIFT);
-	return iomap_atomic_prot_pfn(pfn, mapping->prot);
+
+	return iomap_atomic_prot_pfn(PHYS_PFN(phys_addr), mapping->prot);
 }
 
 static inline void

@@ -113,6 +113,9 @@ int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_free(phys_addr_t base, phys_addr_t size);
 int memblock_reserve(phys_addr_t base, phys_addr_t size);
+#ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
+int memblock_physmem_add(phys_addr_t base, phys_addr_t size);
+#endif
 void memblock_trim_memory(phys_addr_t align);
 bool memblock_overlaps_region(struct memblock_type *type,
 			      phys_addr_t base, phys_addr_t size);

@@ -127,10 +130,6 @@ void reset_node_managed_pages(pg_data_t *pgdat);
 void reset_all_zones_managed_pages(void);
 
 /* Low level functions */
-int memblock_add_range(struct memblock_type *type,
-		       phys_addr_t base, phys_addr_t size,
-		       int nid, enum memblock_flags flags);
-
 void __next_mem_range(u64 *idx, int nid, enum memblock_flags flags,
 		      struct memblock_type *type_a,
 		      struct memblock_type *type_b, phys_addr_t *out_start,

|
|
@ -29,8 +29,6 @@ struct memory_block {
|
|||
int section_count; /* serialized by mem_sysfs_mutex */
|
||||
int online_type; /* for passing data to online routine */
|
||||
int phys_device; /* to which fru does this belong? */
|
||||
void *hw; /* optional pointer to fw/hw data */
|
||||
int (*phys_callback)(struct memory_block *);
|
||||
struct device dev;
|
||||
int nid; /* NID for this memory block */
|
||||
};
|
||||
|
@@ -55,19 +53,6 @@ struct memory_notify {
 	int status_change_nid;
 };
 
-/*
- * During pageblock isolation, count the number of pages within the
- * range [start_pfn, start_pfn + nr_pages) which are owned by code
- * in the notifier chain.
- */
-#define MEM_ISOLATE_COUNT	(1<<0)
-
-struct memory_isolate_notify {
-	unsigned long start_pfn;	/* Start of range to check */
-	unsigned int nr_pages;		/* # pages in range to check */
-	unsigned int pages_found;	/* # pages owned found by callbacks */
-};
-
 struct notifier_block;
 struct mem_section;
 
@@ -94,27 +79,13 @@ static inline int memory_notify(unsigned long val, void *v)
 {
 	return 0;
 }
-static inline int register_memory_isolate_notifier(struct notifier_block *nb)
-{
-	return 0;
-}
-static inline void unregister_memory_isolate_notifier(struct notifier_block *nb)
-{
-}
-static inline int memory_isolate_notify(unsigned long val, void *v)
-{
-	return 0;
-}
 #else
 extern int register_memory_notifier(struct notifier_block *nb);
 extern void unregister_memory_notifier(struct notifier_block *nb);
-extern int register_memory_isolate_notifier(struct notifier_block *nb);
-extern void unregister_memory_isolate_notifier(struct notifier_block *nb);
 int create_memory_block_devices(unsigned long start, unsigned long size);
 void remove_memory_block_devices(unsigned long start, unsigned long size);
 extern void memory_dev_init(void);
 extern int memory_notify(unsigned long val, void *v);
-extern int memory_isolate_notify(unsigned long val, void *v);
 extern struct memory_block *find_memory_block(struct mem_section *);
 typedef int (*walk_memory_blocks_func_t)(struct memory_block *, void *);
 extern int walk_memory_blocks(unsigned long start, unsigned long size,

@@ -94,7 +94,8 @@ extern int zone_grow_free_lists(struct zone *zone, unsigned long new_nr_pages);
 extern int zone_grow_waitqueues(struct zone *zone, unsigned long nr_pages);
 extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
 /* VM interface that may be used by firmware interface */
-extern int online_pages(unsigned long, unsigned long, int);
+extern int online_pages(unsigned long pfn, unsigned long nr_pages,
+			int online_type, int nid);
 extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
 			unsigned long *valid_start, unsigned long *valid_end);
 extern unsigned long __offline_isolated_pages(unsigned long start_pfn,

@@ -70,11 +70,6 @@ static inline void totalram_pages_add(long count)
 	atomic_long_add(count, &_totalram_pages);
 }
 
-static inline void totalram_pages_set(long val)
-{
-	atomic_long_set(&_totalram_pages, val);
-}
-
 extern void * high_memory;
 extern int page_cluster;
 
@@ -916,10 +911,6 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 
 #define ZONEID_PGSHIFT		(ZONEID_PGOFF * (ZONEID_SHIFT != 0))
 
-#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
-#error SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
-#endif
-
 #define ZONES_MASK		((1UL << ZONES_WIDTH) - 1)
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)

@@ -947,9 +938,10 @@ static inline bool is_zone_device_page(const struct page *page)
 #endif
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void __put_devmap_managed_page(struct page *page);
+void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
-static inline bool put_devmap_managed_page(struct page *page)
+
+static inline bool page_is_devmap_managed(struct page *page)
 {
 	if (!static_branch_unlikely(&devmap_managed_key))
 		return false;

@@ -958,7 +950,6 @@ static inline bool put_devmap_managed_page(struct page *page)
 	switch (page->pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
 	case MEMORY_DEVICE_FS_DAX:
-		__put_devmap_managed_page(page);
 		return true;
 	default:
 		break;

@@ -966,11 +957,17 @@ static inline bool put_devmap_managed_page(struct page *page)
 	return false;
 }
 
+void put_devmap_managed_page(struct page *page);
+
 #else /* CONFIG_DEV_PAGEMAP_OPS */
-static inline bool put_devmap_managed_page(struct page *page)
+static inline bool page_is_devmap_managed(struct page *page)
 {
 	return false;
 }
+
+static inline void put_devmap_managed_page(struct page *page)
+{
+}
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
 static inline bool is_device_private_page(const struct page *page)

@@ -1023,37 +1020,37 @@ static inline void put_page(struct page *page)
 	 * need to inform the device driver through callback. See
 	 * include/linux/memremap.h and HMM for details.
 	 */
-	if (put_devmap_managed_page(page))
+	if (page_is_devmap_managed(page)) {
+		put_devmap_managed_page(page);
 		return;
+	}
 
 	if (put_page_testzero(page))
 		__put_page(page);
 }
 
 /**
- * put_user_page() - release a gup-pinned page
+ * unpin_user_page() - release a gup-pinned page
  * @page: pointer to page to be released
  *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
+ * Pages that were pinned via pin_user_pages*() must be released via either
+ * unpin_user_page(), or one of the unpin_user_pages*() routines. This is so
+ * that eventually such pages can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special handling.
  *
- * put_user_page() and put_page() are not interchangeable, despite this early
- * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
+ * unpin_user_page() and put_page() are not interchangeable, despite this early
+ * implementation that makes them look the same. unpin_user_page() calls must
+ * be perfectly matched up with pin*() calls.
  */
-static inline void put_user_page(struct page *page)
+static inline void unpin_user_page(struct page *page)
 {
 	put_page(page);
 }
 
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
-			       bool make_dirty);
+void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+				 bool make_dirty);
 
-void put_user_pages(struct page **pages, unsigned long npages);
+void unpin_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS

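Per the comment above, every pin must be matched by an unpin. A sketch of the pairing using the fast variant declared further down in this file (v5.6-era signatures; the function itself is illustrative only):

/* Pin, dirty, and release a run of user pages. */
static int touch_user_pages(unsigned long start, int nr, struct page **pages)
{
	int got;

	got = pin_user_pages_fast(start, nr, FOLL_WRITE, pages);
	if (got < 0)
		return got;
	/* ... write into the pinned pages here ... */
	unpin_user_pages_dirty_lock(pages, got, true);
	return got;
}
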
@@ -1501,9 +1498,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long start, unsigned long nr_pages,
 			    unsigned int gup_flags, struct page **pages,
 			    struct vm_area_struct **vmas, int *locked);
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 			    unsigned int gup_flags, struct page **pages,
 			    struct vm_area_struct **vmas);
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages, int *locked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,

@@ -1511,6 +1515,8 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 
 int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages);
 
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,

@@ -2575,13 +2581,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
 
 /*
- * NOTE on FOLL_LONGTERM:
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other. Here is what they mean, and how to use them:
  *
  * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control. This is contrasted with
- * iov_iter_get_pages() where usages which are transient.
+ * period _often_ under userspace control. This is in contrast to
+ * iov_iter_get_pages(), whose usages are transient.
  *
  * FIXME: For pages which are part of a filesystem, mappings are subject to the
  * lifetime enforced by the filesystem and we need guarantees that longterm

@@ -2596,11 +2604,39 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 * Currently only get_user_pages() and get_user_pages_fast() support this flag
 * and calls to get_user_pages_[un]locked are specifically not allowed.  This
 * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY
+ * FAULT_FLAG_ALLOW_RETRY.
 *
- * In the CMA case: longterm pins in a CMA region would unnecessarily fragment
- * that region.  And so CMA attempts to migrate the page before pinning when
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region.  And so, CMA attempts to migrate the page before pinning, when
 * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO). This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
+ * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
+ * a call to unpin_user_page().
+ *
+ * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
+ * and separate refcounting mechanisms, however, and that means that each has
+ * its own acquire and release mechanisms:
+ *
+ *     FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
+ *
+ *     FOLL_PIN: pin_user_pages*() to acquire, and unpin_user_pages to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via unpin_user_page().
+ *
+ * Please see Documentation/vm/pin_user_pages.rst for more information.
 */
 
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)

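The two acquire/release pairings described in the comment, side by side, as a hedged sketch (error handling elided; both wrappers are illustrative only):

/* FOLL_GET pairing: get_user_pages*() ... put_page(). */
static void get_then_put(unsigned long start, int nr, struct page **pages)
{
	int i, got = get_user_pages_fast(start, nr, 0, pages);

	for (i = 0; i < got; i++)
		put_page(pages[i]);
}

/* FOLL_PIN pairing: pin_user_pages*() ... unpin_user_pages(). */
static void pin_then_unpin(unsigned long start, int nr, struct page **pages)
{
	int got = pin_user_pages_fast(start, nr, 0, pages);

	if (got > 0)
		unpin_user_pages(pages, got);
}
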
@@ -758,7 +758,7 @@ typedef struct pglist_data {
 
 #ifdef CONFIG_NUMA
 	/*
-	 * zone reclaim becomes active if more unmapped pages exist.
+	 * node reclaim becomes active if more unmapped pages exist.
 	 */
 	unsigned long		min_unmapped_pages;
 	unsigned long		min_slab_pages;

@@ -33,8 +33,8 @@ static inline bool is_migrate_isolate(int migratetype)
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
 
-bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
-			 int migratetype, int flags);
+struct page *has_unmovable_pages(struct zone *zone, struct page *page,
+				 int migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
 				int migratetype, int *num_movable);

@@ -7,6 +7,7 @@
 # define swab16 __swab16
 # define swab32 __swab32
 # define swab64 __swab64
+# define swab __swab
 # define swahw32 __swahw32
 # define swahb32 __swahb32
 # define swab16p __swab16p

@@ -32,17 +32,6 @@
 /* use value, which < 0K, to indicate an invalid/uninitialized temperature */
 #define THERMAL_TEMP_INVALID	-274000

-/* Unit conversion macros */
-#define DECI_KELVIN_TO_CELSIUS(t)	({			\
-	long _t = (t);						\
-	((_t-2732 >= 0) ? (_t-2732+5)/10 : (_t-2732-5)/10);	\
-})
-#define CELSIUS_TO_DECI_KELVIN(t)	((t)*10+2732)
-#define DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(t, off) (((t) - (off)) * 100)
-#define DECI_KELVIN_TO_MILLICELSIUS(t) DECI_KELVIN_TO_MILLICELSIUS_WITH_OFFSET(t, 2732)
-#define MILLICELSIUS_TO_DECI_KELVIN_WITH_OFFSET(t, off) (((t) / 100) + (off))
-#define MILLICELSIUS_TO_DECI_KELVIN(t) MILLICELSIUS_TO_DECI_KELVIN_WITH_OFFSET(t, 2732)
-
 /* Default Thermal Governor */
 #if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
 #define DEFAULT_THERMAL_GOVERNOR       "step_wise"
84	include/linux/units.h  (new file)

@@ -0,0 +1,84 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_UNITS_H
#define _LINUX_UNITS_H

#include <linux/kernel.h>

#define ABSOLUTE_ZERO_MILLICELSIUS -273150

static inline long milli_kelvin_to_millicelsius(long t)
{
	return t + ABSOLUTE_ZERO_MILLICELSIUS;
}

static inline long millicelsius_to_milli_kelvin(long t)
{
	return t - ABSOLUTE_ZERO_MILLICELSIUS;
}

#define MILLIDEGREE_PER_DEGREE 1000
#define MILLIDEGREE_PER_DECIDEGREE 100

static inline long kelvin_to_millicelsius(long t)
{
	return milli_kelvin_to_millicelsius(t * MILLIDEGREE_PER_DEGREE);
}

static inline long millicelsius_to_kelvin(long t)
{
	t = millicelsius_to_milli_kelvin(t);

	return DIV_ROUND_CLOSEST(t, MILLIDEGREE_PER_DEGREE);
}

static inline long deci_kelvin_to_celsius(long t)
{
	t = milli_kelvin_to_millicelsius(t * MILLIDEGREE_PER_DECIDEGREE);

	return DIV_ROUND_CLOSEST(t, MILLIDEGREE_PER_DEGREE);
}

static inline long celsius_to_deci_kelvin(long t)
{
	t = millicelsius_to_milli_kelvin(t * MILLIDEGREE_PER_DEGREE);

	return DIV_ROUND_CLOSEST(t, MILLIDEGREE_PER_DECIDEGREE);
}

/**
 * deci_kelvin_to_millicelsius_with_offset - convert Kelvin to Celsius
 * @t: temperature value in decidegrees Kelvin
 * @offset: difference between Kelvin and Celsius in millidegrees
 *
 * Return: temperature value in millidegrees Celsius
 */
static inline long deci_kelvin_to_millicelsius_with_offset(long t, long offset)
{
	return t * MILLIDEGREE_PER_DECIDEGREE - offset;
}

static inline long deci_kelvin_to_millicelsius(long t)
{
	return milli_kelvin_to_millicelsius(t * MILLIDEGREE_PER_DECIDEGREE);
}

static inline long millicelsius_to_deci_kelvin(long t)
{
	t = millicelsius_to_milli_kelvin(t);

	return DIV_ROUND_CLOSEST(t, MILLIDEGREE_PER_DECIDEGREE);
}

static inline long kelvin_to_celsius(long t)
{
	return t + DIV_ROUND_CLOSEST(ABSOLUTE_ZERO_MILLICELSIUS,
				     MILLIDEGREE_PER_DEGREE);
}

static inline long celsius_to_kelvin(long t)
{
	return t - DIV_ROUND_CLOSEST(ABSOLUTE_ZERO_MILLICELSIUS,
				     MILLIDEGREE_PER_DEGREE);
}

#endif /* _LINUX_UNITS_H */
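A small usage sketch of the new helpers (not from the patch; my_sensor_temp_mc() and the raw register value are hypothetical):

#include <linux/units.h>

static long my_sensor_temp_mc(long raw_deci_kelvin)
{
	/* e.g. raw = 3032 dK -> 303.2 K -> 30050 millidegrees Celsius */
	return deci_kelvin_to_millicelsius(raw_deci_kelvin);
}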
@@ -191,6 +191,12 @@ extern int zlib_deflate_workspacesize (int windowBits, int memLevel);
   exceed those passed here.
*/

+extern int zlib_deflate_dfltcc_enabled (void);
+/*
+   Returns 1 if Deflate-Conversion facility is installed and enabled,
+   otherwise 0.
+*/
+
/*
extern int deflateInit (z_streamp strm, int level);
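A hypothetical caller of the new query function, sketched to show its intent (the picking policy below is an assumption, not something this patch establishes):

#include <linux/zlib.h>

static int my_pick_zlib_level(int requested)
{
	/* The s390 facility only accelerates compression at level 1. */
	if (zlib_deflate_dfltcc_enabled() && requested > 1)
		return 1;
	return requested;
}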
@@ -88,8 +88,8 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
		__entry->node		= node;
	),

-	TP_printk("call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
-		__entry->call_site,
+	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+		(void *)__entry->call_site,
		__entry->ptr,
		__entry->bytes_req,
		__entry->bytes_alloc,
@@ -67,8 +67,8 @@ DECLARE_EVENT_CLASS(writeback_page_template,

	TP_fast_assign(
		strscpy_pad(__entry->name,
-			    mapping ? dev_name(inode_to_bdi(mapping->host)->dev) : "(unknown)",
-			    32);
+			    bdi_dev_name(mapping ? inode_to_bdi(mapping->host) :
+					 NULL), 32);
		__entry->ino = mapping ? mapping->host->i_ino : 0;
		__entry->index = page->index;
	),

@@ -111,8 +111,7 @@ DECLARE_EVENT_CLASS(writeback_dirty_inode_template,
		struct backing_dev_info *bdi = inode_to_bdi(inode);

		/* may be called for files on pseudo FSes w/ unregistered bdi */
-		strscpy_pad(__entry->name,
-			    bdi->dev ? dev_name(bdi->dev) : "(unknown)", 32);
+		strscpy_pad(__entry->name, bdi_dev_name(bdi), 32);
		__entry->ino		= inode->i_ino;
		__entry->state		= inode->i_state;
		__entry->flags		= flags;

@@ -193,7 +192,7 @@ TRACE_EVENT(inode_foreign_history,
	),

	TP_fast_assign(
-		strncpy(__entry->name, dev_name(inode_to_bdi(inode)->dev), 32);
+		strncpy(__entry->name, bdi_dev_name(inode_to_bdi(inode)), 32);
		__entry->ino		= inode->i_ino;
		__entry->cgroup_ino	= __trace_wbc_assign_cgroup(wbc);
		__entry->history	= history;

@@ -222,7 +221,7 @@ TRACE_EVENT(inode_switch_wbs,
	),

	TP_fast_assign(
-		strncpy(__entry->name, dev_name(old_wb->bdi->dev), 32);
+		strncpy(__entry->name, bdi_dev_name(old_wb->bdi), 32);
		__entry->ino		= inode->i_ino;
		__entry->old_cgroup_ino	= __trace_wb_assign_cgroup(old_wb);
		__entry->new_cgroup_ino	= __trace_wb_assign_cgroup(new_wb);

@@ -255,7 +254,7 @@ TRACE_EVENT(track_foreign_dirty,
		struct address_space *mapping = page_mapping(page);
		struct inode *inode = mapping ? mapping->host : NULL;

-		strncpy(__entry->name, dev_name(wb->bdi->dev), 32);
+		strncpy(__entry->name, bdi_dev_name(wb->bdi), 32);
		__entry->bdi_id		= wb->bdi->id;
		__entry->ino		= inode ? inode->i_ino : 0;
		__entry->memcg_id	= wb->memcg_css->id;

@@ -288,7 +287,7 @@ TRACE_EVENT(flush_foreign,
	),

	TP_fast_assign(
-		strncpy(__entry->name, dev_name(wb->bdi->dev), 32);
+		strncpy(__entry->name, bdi_dev_name(wb->bdi), 32);
		__entry->cgroup_ino	= __trace_wb_assign_cgroup(wb);
		__entry->frn_bdi_id	= frn_bdi_id;
		__entry->frn_memcg_id	= frn_memcg_id;

@@ -318,7 +317,7 @@ DECLARE_EVENT_CLASS(writeback_write_inode_template,

	TP_fast_assign(
		strscpy_pad(__entry->name,
-			    dev_name(inode_to_bdi(inode)->dev), 32);
+			    bdi_dev_name(inode_to_bdi(inode)), 32);
		__entry->ino		= inode->i_ino;
		__entry->sync_mode	= wbc->sync_mode;
		__entry->cgroup_ino	= __trace_wbc_assign_cgroup(wbc);

@@ -361,9 +360,7 @@ DECLARE_EVENT_CLASS(writeback_work_class,
		__field(ino_t, cgroup_ino)
	),
	TP_fast_assign(
-		strscpy_pad(__entry->name,
-			    wb->bdi->dev ? dev_name(wb->bdi->dev) :
-			    "(unknown)", 32);
+		strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32);
		__entry->nr_pages = work->nr_pages;
		__entry->sb_dev = work->sb ? work->sb->s_dev : 0;
		__entry->sync_mode = work->sync_mode;

@@ -416,7 +413,7 @@ DECLARE_EVENT_CLASS(writeback_class,
		__field(ino_t, cgroup_ino)
	),
	TP_fast_assign(
-		strscpy_pad(__entry->name, dev_name(wb->bdi->dev), 32);
+		strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32);
		__entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
	),
	TP_printk("bdi %s: cgroup_ino=%lu",

@@ -438,7 +435,7 @@ TRACE_EVENT(writeback_bdi_register,
		__array(char, name, 32)
	),
	TP_fast_assign(
-		strscpy_pad(__entry->name, dev_name(bdi->dev), 32);
+		strscpy_pad(__entry->name, bdi_dev_name(bdi), 32);
	),
	TP_printk("bdi %s",
		__entry->name

@@ -463,7 +460,7 @@ DECLARE_EVENT_CLASS(wbc_class,
	),

	TP_fast_assign(
-		strscpy_pad(__entry->name, dev_name(bdi->dev), 32);
+		strscpy_pad(__entry->name, bdi_dev_name(bdi), 32);
		__entry->nr_to_write	= wbc->nr_to_write;
		__entry->pages_skipped	= wbc->pages_skipped;
		__entry->sync_mode	= wbc->sync_mode;

@@ -514,7 +511,7 @@ TRACE_EVENT(writeback_queue_io,
	),
	TP_fast_assign(
		unsigned long *older_than_this = work->older_than_this;
-		strscpy_pad(__entry->name, dev_name(wb->bdi->dev), 32);
+		strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32);
		__entry->older	= older_than_this ? *older_than_this : 0;
		__entry->age	= older_than_this ?
				  (jiffies - *older_than_this) * 1000 / HZ : -1;

@@ -600,7 +597,7 @@ TRACE_EVENT(bdi_dirty_ratelimit,
	),

	TP_fast_assign(
-		strscpy_pad(__entry->bdi, dev_name(wb->bdi->dev), 32);
+		strscpy_pad(__entry->bdi, bdi_dev_name(wb->bdi), 32);
		__entry->write_bw	= KBps(wb->write_bandwidth);
		__entry->avg_write_bw	= KBps(wb->avg_write_bandwidth);
		__entry->dirty_rate	= KBps(dirty_rate);

@@ -665,7 +662,7 @@ TRACE_EVENT(balance_dirty_pages,

	TP_fast_assign(
		unsigned long freerun = (thresh + bg_thresh) / 2;
-		strscpy_pad(__entry->bdi, dev_name(wb->bdi->dev), 32);
+		strscpy_pad(__entry->bdi, bdi_dev_name(wb->bdi), 32);

		__entry->limit		= global_wb_domain.dirty_limit;
		__entry->setpoint	= (global_wb_domain.dirty_limit +

@@ -726,7 +723,7 @@ TRACE_EVENT(writeback_sb_inodes_requeue,

	TP_fast_assign(
		strscpy_pad(__entry->name,
-			    dev_name(inode_to_bdi(inode)->dev), 32);
+			    bdi_dev_name(inode_to_bdi(inode)), 32);
		__entry->ino		= inode->i_ino;
		__entry->state		= inode->i_state;
		__entry->dirtied_when	= inode->dirtied_when;

@@ -800,7 +797,7 @@ DECLARE_EVENT_CLASS(writeback_single_inode_template,

	TP_fast_assign(
		strscpy_pad(__entry->name,
-			    dev_name(inode_to_bdi(inode)->dev), 32);
+			    bdi_dev_name(inode_to_bdi(inode)), 32);
		__entry->ino		= inode->i_ino;
		__entry->state		= inode->i_state;
		__entry->dirtied_when	= inode->dirtied_when;
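For readability, a sketch of the semantics these call sites now share through bdi_dev_name(). The real helper is introduced elsewhere in this series; the body below is an assumption inferred from the replaced open-coded patterns:

static const char *bdi_dev_name_sketch(struct backing_dev_info *bdi)
{
	if (!bdi || !bdi->dev)
		return "(unknown)";
	return dev_name(bdi->dev);
}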
@@ -4,6 +4,7 @@

 #include <linux/types.h>
 #include <linux/compiler.h>
+#include <asm/bitsperlong.h>
 #include <asm/swab.h>

 /*
@@ -132,6 +133,15 @@ static inline __attribute_const__ __u32 __fswahb32(__u32 val)
	 __fswab64(x))
 #endif

+static __always_inline unsigned long __swab(const unsigned long y)
+{
+#if BITS_PER_LONG == 64
+	return __swab64(y);
+#else /* BITS_PER_LONG == 32 */
+	return __swab32(y);
+#endif
+}
+
 /**
  * __swahw32 - return a word-swapped 32-bit value
  * @x: value to wordswap
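A quick sketch of what the new long-sized swap computes, assuming a 64-bit build (swab_demo() is a made-up illustration, not from the patch):

#include <linux/kernel.h>
#include <linux/swab.h>

static void swab_demo(void)
{
	unsigned long x = 0x0123456789abcdefUL;

	/* With BITS_PER_LONG == 64, swab(x) == 0xefcdab8967452301 */
	pr_info("swab: %lx\n", swab(x));
}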
@@ -195,7 +195,7 @@ enum
	VM_MIN_UNMAPPED=32,	/* Set min percent of unmapped pages */
	VM_PANIC_ON_OOM=33,	/* panic at out-of-memory */
	VM_VDSO_ENABLED=34,	/* map VDSO into new processes? */
-	VM_MIN_SLAB=35,		/* Percent pages ignored by zone reclaim */
+	VM_MIN_SLAB=35,		/* Percent pages ignored by node reclaim */
 };
36	init/main.c

@@ -246,8 +246,7 @@ static int __init loglevel(char *str)
 early_param("loglevel", loglevel);

 /* Change NUL term back to "=", to make "param" the whole string. */
-static int __init repair_env_string(char *param, char *val,
-				    const char *unused, void *arg)
+static void __init repair_env_string(char *param, char *val)
 {
	if (val) {
		/* param=val or param="val"? */

@@ -256,11 +255,9 @@ static int __init repair_env_string(char *param, char *val,
		else if (val == param+strlen(param)+2) {
			val[-2] = '=';
			memmove(val-1, val, strlen(val)+1);
-			val--;
		} else
			BUG();
	}
-	return 0;
 }

 /* Anything after -- gets handed straight to init. */

@@ -272,7 +269,7 @@ static int __init set_init_arg(char *param, char *val,
	if (panic_later)
		return 0;

-	repair_env_string(param, val, unused, NULL);
+	repair_env_string(param, val);

	for (i = 0; argv_init[i]; i++) {
		if (i == MAX_INIT_ARGS) {

@@ -292,14 +289,16 @@ static int __init set_init_arg(char *param, char *val,
 static int __init unknown_bootoption(char *param, char *val,
				     const char *unused, void *arg)
 {
-	repair_env_string(param, val, unused, NULL);
+	size_t len = strlen(param);
+
+	repair_env_string(param, val);

	/* Handle obsolete-style parameters */
	if (obsolete_checksetup(param))
		return 0;

	/* Unused module parameter. */
-	if (strchr(param, '.') && (!val || strchr(param, '.') < val))
+	if (strnchr(param, len, '.'))
		return 0;

	if (panic_later)

@@ -313,7 +312,7 @@ static int __init unknown_bootoption(char *param, char *val,
			panic_later = "env";
			panic_param = param;
		}
-		if (!strncmp(param, envp_init[i], val - param))
+		if (!strncmp(param, envp_init[i], len+1))
			break;
	}
	envp_init[i] = param;

@@ -991,6 +990,12 @@ static const char *initcall_level_names[] __initdata = {
	"late",
 };

+static int __init ignore_unknown_bootoption(char *param, char *val,
+					    const char *unused, void *arg)
+{
+	return 0;
+}
+
 static void __init do_initcall_level(int level)
 {
	initcall_entry_t *fn;

@@ -1000,7 +1005,7 @@ static void __init do_initcall_level(int level)
		   initcall_command_line, __start___param,
		   __stop___param - __start___param,
		   level, level,
-		   NULL, &repair_env_string);
+		   NULL, ignore_unknown_bootoption);

	trace_initcall_level(initcall_level_names[level]);
	for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++)

@@ -1043,8 +1048,16 @@ static void __init do_pre_smp_initcalls(void)

 static int run_init_process(const char *init_filename)
 {
+	const char *const *p;
+
	argv_init[0] = init_filename;
	pr_info("Run %s as init process\n", init_filename);
+	pr_debug("  with arguments:\n");
+	for (p = argv_init; *p; p++)
+		pr_debug("    %s\n", *p);
+	pr_debug("  with environment:\n");
+	for (p = envp_init; *p; p++)
+		pr_debug("    %s\n", *p);
	return do_execve(getname_kernel(init_filename),
		(const char __user *const __user *)argv_init,
		(const char __user *const __user *)envp_init);

@@ -1091,6 +1104,11 @@ static void mark_readonly(void)
	} else
		pr_info("Kernel memory protection disabled.\n");
 }
+#elif defined(CONFIG_ARCH_HAS_STRICT_KERNEL_RWX)
+static inline void mark_readonly(void)
+{
+	pr_warn("Kernel memory protection not selected by kernel config.\n");
+}
 #else
 static inline void mark_readonly(void)
 {
@@ -27,6 +27,7 @@ KCOV_INSTRUMENT_softirq.o := n
 # and produce insane amounts of uninteresting coverage.
 KCOV_INSTRUMENT_module.o := n
 KCOV_INSTRUMENT_extable.o := n
+KCOV_INSTRUMENT_stacktrace.o := n
 # Don't self-instrument.
 KCOV_INSTRUMENT_kcov.o := n
 KASAN_SANITIZE_kcov.o := n
@@ -278,6 +278,13 @@ config ZLIB_DEFLATE
	tristate
	select BITREVERSE

+config ZLIB_DFLTCC
+	def_bool y
+	depends on S390
+	prompt "Enable s390x DEFLATE CONVERSION CALL support for kernel zlib"
+	help
+	 Enable s390x hardware support for zlib in the kernel.
+
 config LZO_COMPRESS
	tristate
@@ -16,6 +16,7 @@ KCOV_INSTRUMENT_rbtree.o := n
 KCOV_INSTRUMENT_list_debug.o := n
 KCOV_INSTRUMENT_debugobjects.o := n
 KCOV_INSTRUMENT_dynamic_debug.o := n
+KCOV_INSTRUMENT_fault-inject.o := n

 # Early boot use of cmdline, don't instrument it
 ifdef CONFIG_AMD_MEM_ENCRYPT

@@ -140,6 +141,7 @@ obj-$(CONFIG_842_COMPRESS)  += 842/
 obj-$(CONFIG_842_DECOMPRESS) += 842/
 obj-$(CONFIG_ZLIB_INFLATE) += zlib_inflate/
 obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
+obj-$(CONFIG_ZLIB_DFLTCC) += zlib_dfltcc/
 obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
 obj-$(CONFIG_BCH) += bch.o
 obj-$(CONFIG_LZO_COMPRESS) += lzo/
@@ -10,6 +10,10 @@
 #include "zlib_inflate/inftrees.c"
 #include "zlib_inflate/inffast.c"
 #include "zlib_inflate/inflate.c"
+#ifdef CONFIG_ZLIB_DFLTCC
+#include "zlib_dfltcc/dfltcc.c"
+#include "zlib_dfltcc/dfltcc_inflate.c"
+#endif

 #else /* STATIC */
 /* initramfs et al: linked */

@@ -76,7 +80,12 @@ STATIC int INIT __gunzip(unsigned char *buf, long len,
	}

	strm->workspace = malloc(flush ? zlib_inflate_workspacesize() :
+#ifdef CONFIG_ZLIB_DFLTCC
+				 /* Always allocate the full workspace for DFLTCC */
+				 zlib_inflate_workspacesize());
+#else
				 sizeof(struct inflate_state));
+#endif
	if (strm->workspace == NULL) {
		error("Out of memory while allocating workspace");
		goto gunzip_nomem4;

@@ -123,10 +132,14 @@ STATIC int INIT __gunzip(unsigned char *buf, long len,

	rc = zlib_inflateInit2(strm, -MAX_WBITS);

+#ifdef CONFIG_ZLIB_DFLTCC
+	/* Always keep the window for DFLTCC */
+#else
	if (!flush) {
		WS(strm)->inflate_state.wsize = 0;
		WS(strm)->inflate_state.window = NULL;
	}
+#endif

	while (rc == Z_OK) {
		if (strm->avail_in == 0) {
@@ -17,9 +17,9 @@
 #include <linux/export.h>
 #include <linux/kernel.h>

-#if !defined(find_next_bit) || !defined(find_next_zero_bit) ||			\
-	!defined(find_next_and_bit)
-
+#if !defined(find_next_bit) || !defined(find_next_zero_bit) ||			\
+	!defined(find_next_bit_le) || !defined(find_next_zero_bit_le) ||	\
+	!defined(find_next_and_bit)
 /*
  * This is a common helper function for find_next_bit, find_next_zero_bit, and
  * find_next_and_bit. The differences are:

@@ -27,11 +27,11 @@
  *  searching it for one bits.
  * - The optional "addr2", which is anded with "addr1" if present.
  */
-static inline unsigned long _find_next_bit(const unsigned long *addr1,
+static unsigned long _find_next_bit(const unsigned long *addr1,
		const unsigned long *addr2, unsigned long nbits,
-		unsigned long start, unsigned long invert)
+		unsigned long start, unsigned long invert, unsigned long le)
 {
-	unsigned long tmp;
+	unsigned long tmp, mask;

	if (unlikely(start >= nbits))
		return nbits;

@@ -42,7 +42,12 @@ static inline unsigned long _find_next_bit(const unsigned long *addr1,
	tmp ^= invert;

	/* Handle 1st word. */
-	tmp &= BITMAP_FIRST_WORD_MASK(start);
+	mask = BITMAP_FIRST_WORD_MASK(start);
+	if (le)
+		mask = swab(mask);
+
+	tmp &= mask;
+
	start = round_down(start, BITS_PER_LONG);

	while (!tmp) {

@@ -56,6 +61,9 @@ static inline unsigned long _find_next_bit(const unsigned long *addr1,
		tmp ^= invert;
	}

+	if (le)
+		tmp = swab(tmp);
+
	return min(start + __ffs(tmp), nbits);
 }
 #endif

@@ -67,7 +75,7 @@ static inline unsigned long _find_next_bit(const unsigned long *addr1,
 unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
			    unsigned long offset)
 {
-	return _find_next_bit(addr, NULL, size, offset, 0UL);
+	return _find_next_bit(addr, NULL, size, offset, 0UL, 0);
 }
 EXPORT_SYMBOL(find_next_bit);
 #endif

@@ -76,7 +84,7 @@ EXPORT_SYMBOL(find_next_bit);
 unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
				 unsigned long offset)
 {
-	return _find_next_bit(addr, NULL, size, offset, ~0UL);
+	return _find_next_bit(addr, NULL, size, offset, ~0UL, 0);
 }
 EXPORT_SYMBOL(find_next_zero_bit);
 #endif

@@ -86,7 +94,7 @@ unsigned long find_next_and_bit(const unsigned long *addr1,
		const unsigned long *addr2, unsigned long size,
		unsigned long offset)
 {
-	return _find_next_bit(addr1, addr2, size, offset, 0UL);
+	return _find_next_bit(addr1, addr2, size, offset, 0UL, 0);
 }
 EXPORT_SYMBOL(find_next_and_bit);
 #endif

@@ -149,57 +157,11 @@ EXPORT_SYMBOL(find_last_bit);

 #ifdef __BIG_ENDIAN

-/* include/linux/byteorder does not support "unsigned long" type */
-static inline unsigned long ext2_swab(const unsigned long y)
-{
-#if BITS_PER_LONG == 64
-	return (unsigned long) __swab64((u64) y);
-#elif BITS_PER_LONG == 32
-	return (unsigned long) __swab32((u32) y);
-#else
-#error BITS_PER_LONG not defined
-#endif
-}
-
-#if !defined(find_next_bit_le) || !defined(find_next_zero_bit_le)
-static inline unsigned long _find_next_bit_le(const unsigned long *addr1,
-		const unsigned long *addr2, unsigned long nbits,
-		unsigned long start, unsigned long invert)
-{
-	unsigned long tmp;
-
-	if (unlikely(start >= nbits))
-		return nbits;
-
-	tmp = addr1[start / BITS_PER_LONG];
-	if (addr2)
-		tmp &= addr2[start / BITS_PER_LONG];
-	tmp ^= invert;
-
-	/* Handle 1st word. */
-	tmp &= ext2_swab(BITMAP_FIRST_WORD_MASK(start));
-	start = round_down(start, BITS_PER_LONG);
-
-	while (!tmp) {
-		start += BITS_PER_LONG;
-		if (start >= nbits)
-			return nbits;
-
-		tmp = addr1[start / BITS_PER_LONG];
-		if (addr2)
-			tmp &= addr2[start / BITS_PER_LONG];
-		tmp ^= invert;
-	}
-
-	return min(start + __ffs(ext2_swab(tmp)), nbits);
-}
-#endif
-
 #ifndef find_next_zero_bit_le
 unsigned long find_next_zero_bit_le(const void *addr, unsigned
		long size, unsigned long offset)
 {
-	return _find_next_bit_le(addr, NULL, size, offset, ~0UL);
+	return _find_next_bit(addr, NULL, size, offset, ~0UL, 1);
 }
 EXPORT_SYMBOL(find_next_zero_bit_le);
 #endif

@@ -208,7 +170,7 @@ EXPORT_SYMBOL(find_next_zero_bit_le);
 unsigned long find_next_bit_le(const void *addr, unsigned
		long size, unsigned long offset)
 {
-	return _find_next_bit_le(addr, NULL, size, offset, 0UL);
+	return _find_next_bit(addr, NULL, size, offset, 0UL, 1);
 }
 EXPORT_SYMBOL(find_next_bit_le);
 #endif
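A userspace sketch of the byte-swap trick the unified helper relies on: on a big-endian machine, swapping an LE bitmap word lets the same find-first-set logic serve both layouts. The constant below simulates what a big-endian load of an LE bitmap (bit 0 set) would produce; a 64-bit long is assumed:

#include <stdio.h>

static unsigned long swab_ul(unsigned long y)
{
	return __builtin_bswap64(y);	/* BITS_PER_LONG == 64 assumed */
}

int main(void)
{
	/* Bit 0 of an LE bitmap lives in the lowest-addressed byte,
	 * which a big-endian load places in the most significant byte. */
	unsigned long be_view = 0x0100000000000000UL;
	unsigned long tmp = swab_ul(be_view);

	printf("first set bit: %d\n", __builtin_ctzl(tmp));	/* prints 0 */
	return 0;
}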
@@ -311,7 +311,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
			if (prv)
				table->nents = ++table->orig_nents;

-				return -ENOMEM;
+			return -ENOMEM;
		}

		sg_init_table(sg, alloc_size);
@@ -275,22 +275,23 @@ static void __init test_copy(void)
 static void __init test_replace(void)
 {
	unsigned int nbits = 64;
+	unsigned int nlongs = DIV_ROUND_UP(nbits, BITS_PER_LONG);
	DECLARE_BITMAP(bmap, 1024);

	bitmap_zero(bmap, 1024);
-	bitmap_replace(bmap, &exp2[0], &exp2[1], exp2_to_exp3_mask, nbits);
+	bitmap_replace(bmap, &exp2[0 * nlongs], &exp2[1 * nlongs], exp2_to_exp3_mask, nbits);
	expect_eq_bitmap(bmap, exp3_0_1, nbits);

	bitmap_zero(bmap, 1024);
-	bitmap_replace(bmap, &exp2[1], &exp2[0], exp2_to_exp3_mask, nbits);
+	bitmap_replace(bmap, &exp2[1 * nlongs], &exp2[0 * nlongs], exp2_to_exp3_mask, nbits);
	expect_eq_bitmap(bmap, exp3_1_0, nbits);

	bitmap_fill(bmap, 1024);
-	bitmap_replace(bmap, &exp2[0], &exp2[1], exp2_to_exp3_mask, nbits);
+	bitmap_replace(bmap, &exp2[0 * nlongs], &exp2[1 * nlongs], exp2_to_exp3_mask, nbits);
	expect_eq_bitmap(bmap, exp3_0_1, nbits);

	bitmap_fill(bmap, 1024);
-	bitmap_replace(bmap, &exp2[1], &exp2[0], exp2_to_exp3_mask, nbits);
+	bitmap_replace(bmap, &exp2[1 * nlongs], &exp2[0 * nlongs], exp2_to_exp3_mask, nbits);
	expect_eq_bitmap(bmap, exp3_1_0, nbits);
 }
@@ -158,6 +158,7 @@ static noinline void __init kmalloc_oob_krealloc_more(void)
	if (!ptr1 || !ptr2) {
		pr_err("Allocation failed\n");
		kfree(ptr1);
+		kfree(ptr2);
		return;
	}
@@ -52,16 +52,19 @@
 #include <linux/zutil.h>
 #include "defutil.h"

+/* architecture-specific bits */
+#ifdef CONFIG_ZLIB_DFLTCC
+#   include "../zlib_dfltcc/dfltcc.h"
+#else
+#define DEFLATE_RESET_HOOK(strm) do {} while (0)
+#define DEFLATE_HOOK(strm, flush, bstate) 0
+#define DEFLATE_NEED_CHECKSUM(strm) 1
+#define DEFLATE_DFLTCC_ENABLED() 0
+#endif
+
 /* ===========================================================================
  *  Function prototypes.
  */
-typedef enum {
-    need_more,      /* block not completed, need more input or more output */
-    block_done,     /* block flush performed */
-    finish_started, /* finish started, need only more output at next deflate */
-    finish_done     /* finish done, accept no more input or output */
-} block_state;

 typedef block_state (*compress_func) (deflate_state *s, int flush);
 /* Compression function. Returns the block state after the call. */

@@ -72,7 +75,6 @@ static block_state deflate_fast (deflate_state *s, int flush);
 static block_state deflate_slow (deflate_state *s, int flush);
 static void lm_init (deflate_state *s);
 static void putShortMSB (deflate_state *s, uInt b);
-static void flush_pending (z_streamp strm);
 static int read_buf (z_streamp strm, Byte *buf, unsigned size);
 static uInt longest_match (deflate_state *s, IPos cur_match);

@@ -98,6 +100,25 @@ static void check_match (deflate_state *s, IPos start, IPos match,
  * See deflate.c for comments about the MIN_MATCH+1.
  */

+/* Workspace to be allocated for deflate processing */
+typedef struct deflate_workspace {
+    /* State memory for the deflator */
+    deflate_state deflate_memory;
+#ifdef CONFIG_ZLIB_DFLTCC
+    /* State memory for s390 hardware deflate */
+    struct dfltcc_state dfltcc_memory;
+#endif
+    Byte *window_memory;
+    Pos *prev_memory;
+    Pos *head_memory;
+    char *overlay_memory;
+} deflate_workspace;
+
+#ifdef CONFIG_ZLIB_DFLTCC
+/* dfltcc_state must be doubleword aligned for DFLTCC call */
+static_assert(offsetof(struct deflate_workspace, dfltcc_memory) % 8 == 0);
+#endif
+
 /* Values for max_lazy_match, good_match and max_chain_length, depending on
  * the desired pack level (0..9). The values given below have been tuned to
  * exclude worst case performance for pathological files. Better values may be

@@ -207,7 +228,15 @@ int zlib_deflateInit2(
     */
    next = (char *) mem;
    next += sizeof(*mem);
+#ifdef CONFIG_ZLIB_DFLTCC
+    /*
+     *  DFLTCC requires the window to be page aligned.
+     *  Thus, we overallocate and take the aligned portion of the buffer.
+     */
+    mem->window_memory = (Byte *) PTR_ALIGN(next, PAGE_SIZE);
+#else
    mem->window_memory = (Byte *) next;
+#endif
    next += zlib_deflate_window_memsize(windowBits);
    mem->prev_memory = (Pos *) next;
    next += zlib_deflate_prev_memsize(windowBits);

@@ -277,6 +306,8 @@ int zlib_deflateReset(
    zlib_tr_init(s);
    lm_init(s);

+    DEFLATE_RESET_HOOK(strm);
+
    return Z_OK;
 }

@@ -294,35 +325,6 @@ static void putShortMSB(
    put_byte(s, (Byte)(b & 0xff));
 }

-/* =========================================================================
- * Flush as much pending output as possible. All deflate() output goes
- * through this function so some applications may wish to modify it
- * to avoid allocating a large strm->next_out buffer and copying into it.
- * (See also read_buf()).
- */
-static void flush_pending(
-	z_streamp strm
-)
-{
-    deflate_state *s = (deflate_state *) strm->state;
-    unsigned len = s->pending;
-
-    if (len > strm->avail_out) len = strm->avail_out;
-    if (len == 0) return;
-
-    if (strm->next_out != NULL) {
-	memcpy(strm->next_out, s->pending_out, len);
-	strm->next_out += len;
-    }
-    s->pending_out += len;
-    strm->total_out += len;
-    strm->avail_out -= len;
-    s->pending -= len;
-    if (s->pending == 0) {
-	s->pending_out = s->pending_buf;
-    }
-}
-
 /* ========================================================================= */
 int zlib_deflate(
    z_streamp strm,

@@ -404,7 +406,8 @@ int zlib_deflate(
	    (flush != Z_NO_FLUSH && s->status != FINISH_STATE)) {
	    block_state bstate;

-	    bstate = (*(configuration_table[s->level].func))(s, flush);
+	    bstate = DEFLATE_HOOK(strm, flush, &bstate) ? bstate :
+		     (*(configuration_table[s->level].func))(s, flush);

	    if (bstate == finish_started || bstate == finish_done) {
		s->status = FINISH_STATE;

@@ -503,7 +506,8 @@ static int read_buf(

    strm->avail_in  -= len;

-    if (!((deflate_state *)(strm->state))->noheader) {
+    if (!DEFLATE_NEED_CHECKSUM(strm)) {}
+    else if (!((deflate_state *)(strm->state))->noheader) {
        strm->adler = zlib_adler32(strm->adler, strm->next_in, len);
    }
    memcpy(buf, strm->next_in, len);

@@ -1135,3 +1139,8 @@ int zlib_deflate_workspacesize(int windowBits, int memLevel)
        + zlib_deflate_head_memsize(memLevel)
        + zlib_deflate_overlay_memsize(memLevel);
 }
+
+int zlib_deflate_dfltcc_enabled(void)
+{
+   return DEFLATE_DFLTCC_ENABLED();
+}
@@ -12,6 +12,7 @@
 #include <linux/zlib.h>

 EXPORT_SYMBOL(zlib_deflate_workspacesize);
+EXPORT_SYMBOL(zlib_deflate_dfltcc_enabled);
 EXPORT_SYMBOL(zlib_deflate);
 EXPORT_SYMBOL(zlib_deflateInit2);
 EXPORT_SYMBOL(zlib_deflateEnd);
@@ -76,11 +76,6 @@ static const uch bl_order[BL_CODES]
  * probability, to avoid transmitting the lengths for unused bit length codes.
  */

-#define Buf_size (8 * 2*sizeof(char))
-/* Number of bits used within bi_buf. (bi_buf might be implemented on
- * more than 16 bits on some systems.)
- */
-
 /* ===========================================================================
  * Local data. These are initialized only once.
  */

@@ -147,7 +142,6 @@ static void send_all_trees (deflate_state *s, int lcodes, int dcodes,
 static void compress_block (deflate_state *s, ct_data *ltree,
			ct_data *dtree);
 static void set_data_type  (deflate_state *s);
-static void bi_windup      (deflate_state *s);
 static void bi_flush       (deflate_state *s);
 static void copy_block     (deflate_state *s, char *buf, unsigned len,
			int header);

@@ -169,54 +163,6 @@ static void copy_block (deflate_state *s, char *buf, unsigned len,
  * used.
  */

-/* ===========================================================================
- * Send a value on a given number of bits.
- * IN assertion: length <= 16 and value fits in length bits.
- */
-#ifdef DEBUG_ZLIB
-static void send_bits      (deflate_state *s, int value, int length);
-
-static void send_bits(
-	deflate_state *s,
-	int value,  /* value to send */
-	int length  /* number of bits */
-)
-{
-    Tracevv((stderr," l %2d v %4x ", length, value));
-    Assert(length > 0 && length <= 15, "invalid length");
-    s->bits_sent += (ulg)length;
-
-    /* If not enough room in bi_buf, use (valid) bits from bi_buf and
-     * (16 - bi_valid) bits from value, leaving (width - (16-bi_valid))
-     * unused bits in value.
-     */
-    if (s->bi_valid > (int)Buf_size - length) {
-        s->bi_buf |= (value << s->bi_valid);
-        put_short(s, s->bi_buf);
-        s->bi_buf = (ush)value >> (Buf_size - s->bi_valid);
-        s->bi_valid += length - Buf_size;
-    } else {
-        s->bi_buf |= value << s->bi_valid;
-        s->bi_valid += length;
-    }
-}
-#else /* !DEBUG_ZLIB */
-
-#define send_bits(s, value, length) \
-{ int len = length;\
-  if (s->bi_valid > (int)Buf_size - len) {\
-    int val = value;\
-    s->bi_buf |= (val << s->bi_valid);\
-    put_short(s, s->bi_buf);\
-    s->bi_buf = (ush)val >> (Buf_size - s->bi_valid);\
-    s->bi_valid += len - Buf_size;\
-  } else {\
-    s->bi_buf |= (value) << s->bi_valid;\
-    s->bi_valid += len;\
-  }\
-}
-#endif /* DEBUG_ZLIB */
-
 /* ===========================================================================
  * Initialize the various 'constant' tables. In a multi-threaded environment,
  * this function may be called by two threads concurrently, but this is
@@ -1,5 +1,7 @@
+#ifndef DEFUTIL_H
+#define DEFUTIL_H

 #include <linux/zutil.h>

 #define Assert(err, str)
 #define Trace(dummy)

@@ -238,17 +240,13 @@ typedef struct deflate_state {

 } deflate_state;

-typedef struct deflate_workspace {
-    /* State memory for the deflator */
-    deflate_state deflate_memory;
-    Byte *window_memory;
-    Pos *prev_memory;
-    Pos *head_memory;
-    char *overlay_memory;
-} deflate_workspace;
-
+#ifdef CONFIG_ZLIB_DFLTCC
+#define zlib_deflate_window_memsize(windowBits) \
+	(2 * (1 << (windowBits)) * sizeof(Byte) + PAGE_SIZE)
+#else
 #define zlib_deflate_window_memsize(windowBits) \
	(2 * (1 << (windowBits)) * sizeof(Byte))
+#endif
 #define zlib_deflate_prev_memsize(windowBits) \
	((1 << (windowBits)) * sizeof(Pos))
 #define zlib_deflate_head_memsize(memLevel) \

@@ -292,6 +290,24 @@ void zlib_tr_stored_type_only (deflate_state *);
    put_byte(s, (uch)((ush)(w) >> 8)); \
 }

+/* ===========================================================================
+ * Reverse the first len bits of a code, using straightforward code (a faster
+ * method would use a table)
+ * IN assertion: 1 <= len <= 15
+ */
+static inline unsigned  bi_reverse(
+    unsigned code, /* the value to invert */
+    int len        /* its bit length */
+)
+{
+    register unsigned res = 0;
+    do {
+        res |= code & 1;
+        code >>= 1, res <<= 1;
+    } while (--len > 0);
+    return res >> 1;
+}
+
 /* ===========================================================================
  * Flush the bit buffer, keeping at most 7 bits in it.
  */

@@ -325,3 +341,101 @@ static inline void bi_windup(deflate_state *s)
 #endif
 }

+typedef enum {
+    need_more,      /* block not completed, need more input or more output */
+    block_done,     /* block flush performed */
+    finish_started, /* finish started, need only more output at next deflate */
+    finish_done     /* finish done, accept no more input or output */
+} block_state;
+
+#define Buf_size (8 * 2*sizeof(char))
+/* Number of bits used within bi_buf. (bi_buf might be implemented on
+ * more than 16 bits on some systems.)
+ */
+
+/* ===========================================================================
+ * Send a value on a given number of bits.
+ * IN assertion: length <= 16 and value fits in length bits.
+ */
+#ifdef DEBUG_ZLIB
+static void send_bits      (deflate_state *s, int value, int length);
+
+static void send_bits(
+    deflate_state *s,
+    int value,  /* value to send */
+    int length  /* number of bits */
+)
+{
+    Tracevv((stderr," l %2d v %4x ", length, value));
+    Assert(length > 0 && length <= 15, "invalid length");
+    s->bits_sent += (ulg)length;
+
+    /* If not enough room in bi_buf, use (valid) bits from bi_buf and
+     * (16 - bi_valid) bits from value, leaving (width - (16-bi_valid))
+     * unused bits in value.
+     */
+    if (s->bi_valid > (int)Buf_size - length) {
+        s->bi_buf |= (value << s->bi_valid);
+        put_short(s, s->bi_buf);
+        s->bi_buf = (ush)value >> (Buf_size - s->bi_valid);
+        s->bi_valid += length - Buf_size;
+    } else {
+        s->bi_buf |= value << s->bi_valid;
+        s->bi_valid += length;
+    }
+}
+#else /* !DEBUG_ZLIB */
+
+#define send_bits(s, value, length) \
+{ int len = length;\
+  if (s->bi_valid > (int)Buf_size - len) {\
+    int val = value;\
+    s->bi_buf |= (val << s->bi_valid);\
+    put_short(s, s->bi_buf);\
+    s->bi_buf = (ush)val >> (Buf_size - s->bi_valid);\
+    s->bi_valid += len - Buf_size;\
+  } else {\
+    s->bi_buf |= (value) << s->bi_valid;\
+    s->bi_valid += len;\
+  }\
+}
+#endif /* DEBUG_ZLIB */
+
+static inline void zlib_tr_send_bits(
+    deflate_state *s,
+    int value,
+    int length
+)
+{
+    send_bits(s, value, length);
+}
+
+/* =========================================================================
+ * Flush as much pending output as possible. All deflate() output goes
+ * through this function so some applications may wish to modify it
+ * to avoid allocating a large strm->next_out buffer and copying into it.
+ * (See also read_buf()).
+ */
+static inline void flush_pending(
+    z_streamp strm
+)
+{
+    deflate_state *s = (deflate_state *) strm->state;
+    unsigned len = s->pending;
+
+    if (len > strm->avail_out) len = strm->avail_out;
+    if (len == 0) return;
+
+    if (strm->next_out != NULL) {
+        memcpy(strm->next_out, s->pending_out, len);
+        strm->next_out += len;
+    }
+    s->pending_out += len;
+    strm->total_out += len;
+    strm->avail_out -= len;
+    s->pending -= len;
+    if (s->pending == 0) {
+        s->pending_out = s->pending_buf;
+    }
+}
+#endif /* DEFUTIL_H */
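A worked example of what bi_reverse() computes, as a standalone userspace copy (illustration only, not part of the patch):

#include <stdio.h>

static unsigned bi_reverse(unsigned code, int len)
{
	unsigned res = 0;
	do {
		res |= code & 1;
		code >>= 1, res <<= 1;
	} while (--len > 0);
	return res >> 1;
}

int main(void)
{
	/* 0b1011 reversed over 4 bits is 0b1101 */
	printf("%x\n", bi_reverse(0xb, 4));	/* prints d */
	return 0;
}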
11	lib/zlib_dfltcc/Makefile  (new file)

@@ -0,0 +1,11 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# This is a modified version of zlib, which does all memory
# allocation ahead of time.
#
# This is the code for s390 zlib hardware support.
#

obj-$(CONFIG_ZLIB_DFLTCC) += zlib_dfltcc.o

zlib_dfltcc-objs := dfltcc.o dfltcc_deflate.o dfltcc_inflate.o dfltcc_syms.o
55	lib/zlib_dfltcc/dfltcc.c  (new file)

@@ -0,0 +1,55 @@
// SPDX-License-Identifier: Zlib
/* dfltcc.c - SystemZ DEFLATE CONVERSION CALL support. */

#include <linux/zutil.h>
#include "dfltcc_util.h"
#include "dfltcc.h"

char *oesc_msg(
    char *buf,
    int oesc
)
{
    if (oesc == 0x00)
        return NULL; /* Successful completion */
    else {
#ifdef STATIC
        return NULL; /* Ignore for pre-boot decompressor */
#else
        sprintf(buf, "Operation-Ending-Supplemental Code is 0x%.2X", oesc);
        return buf;
#endif
    }
}

void dfltcc_reset(
    z_streamp strm,
    uInt size
)
{
    struct dfltcc_state *dfltcc_state =
        (struct dfltcc_state *)((char *)strm->state + size);
    struct dfltcc_qaf_param *param =
        (struct dfltcc_qaf_param *)&dfltcc_state->param;

    /* Initialize available functions */
    if (is_dfltcc_enabled()) {
        dfltcc(DFLTCC_QAF, param, NULL, NULL, NULL, NULL, NULL);
        memmove(&dfltcc_state->af, param, sizeof(dfltcc_state->af));
    } else
        memset(&dfltcc_state->af, 0, sizeof(dfltcc_state->af));

    /* Initialize parameter block */
    memset(&dfltcc_state->param, 0, sizeof(dfltcc_state->param));
    dfltcc_state->param.nt = 1;

    /* Initialize tuning parameters */
    if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG)
        dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG;
    else
        dfltcc_state->level_mask = DFLTCC_LEVEL_MASK;
    dfltcc_state->block_size = DFLTCC_BLOCK_SIZE;
    dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE;
    dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE;
    dfltcc_state->param.ribm = DFLTCC_RIBM;
}
155	lib/zlib_dfltcc/dfltcc.h  (new file)

@@ -0,0 +1,155 @@
// SPDX-License-Identifier: Zlib
#ifndef DFLTCC_H
#define DFLTCC_H

#include "../zlib_deflate/defutil.h"
#include <asm/facility.h>
#include <asm/setup.h>

/*
 * Tuning parameters.
 */
#define DFLTCC_LEVEL_MASK 0x2 /* DFLTCC compression for level 1 only */
#define DFLTCC_LEVEL_MASK_DEBUG 0x3fe /* DFLTCC compression for all levels */
#define DFLTCC_BLOCK_SIZE 1048576
#define DFLTCC_FIRST_FHT_BLOCK_SIZE 4096
#define DFLTCC_DHT_MIN_SAMPLE_SIZE 4096
#define DFLTCC_RIBM 0

#define DFLTCC_FACILITY 151

/*
 * Parameter Block for Query Available Functions.
 */
struct dfltcc_qaf_param {
    char fns[16];
    char reserved1[8];
    char fmts[2];
    char reserved2[6];
};

static_assert(sizeof(struct dfltcc_qaf_param) == 32);

#define DFLTCC_FMT0 0

/*
 * Parameter Block for Generate Dynamic-Huffman Table, Compress and Expand.
 */
struct dfltcc_param_v0 {
    uint16_t pbvn;		/* Parameter-Block-Version Number */
    uint8_t mvn;		/* Model-Version Number */
    uint8_t ribm;		/* Reserved for IBM use */
    unsigned reserved32 : 31;
    unsigned cf : 1;		/* Continuation Flag */
    uint8_t reserved64[8];
    unsigned nt : 1;		/* New Task */
    unsigned reserved129 : 1;
    unsigned cvt : 1;		/* Check Value Type */
    unsigned reserved131 : 1;
    unsigned htt : 1;		/* Huffman-Table Type */
    unsigned bcf : 1;		/* Block-Continuation Flag */
    unsigned bcc : 1;		/* Block Closing Control */
    unsigned bhf : 1;		/* Block Header Final */
    unsigned reserved136 : 1;
    unsigned reserved137 : 1;
    unsigned dhtgc : 1;		/* DHT Generation Control */
    unsigned reserved139 : 5;
    unsigned reserved144 : 5;
    unsigned sbb : 3;		/* Sub-Byte Boundary */
    uint8_t oesc;		/* Operation-Ending-Supplemental Code */
    unsigned reserved160 : 12;
    unsigned ifs : 4;		/* Incomplete-Function Status */
    uint16_t ifl;		/* Incomplete-Function Length */
    uint8_t reserved192[8];
    uint8_t reserved256[8];
    uint8_t reserved320[4];
    uint16_t hl;		/* History Length */
    unsigned reserved368 : 1;
    uint16_t ho : 15;		/* History Offset */
    uint32_t cv;		/* Check Value */
    unsigned eobs : 15;		/* End-of-block Symbol */
    unsigned reserved431 : 1;
    uint8_t eobl : 4;		/* End-of-block Length */
    unsigned reserved436 : 12;
    unsigned reserved448 : 4;
    uint16_t cdhtl : 12;	/* Compressed-Dynamic-Huffman Table
				   Length */
    uint8_t reserved464[6];
    uint8_t cdht[288];
    uint8_t reserved[32];
    uint8_t csb[1152];
};

static_assert(sizeof(struct dfltcc_param_v0) == 1536);

#define CVT_CRC32 0
#define CVT_ADLER32 1
#define HTT_FIXED 0
#define HTT_DYNAMIC 1

/*
 * Extension of inflate_state and deflate_state for DFLTCC.
 */
struct dfltcc_state {
    struct dfltcc_param_v0 param;	/* Parameter block */
    struct dfltcc_qaf_param af;		/* Available functions */
    uLong level_mask;			/* Levels on which to use DFLTCC */
    uLong block_size;			/* New block each X bytes */
    uLong block_threshold;		/* New block after total_in > X */
    uLong dht_threshold;		/* New block only if avail_in >= X */
    char msg[64];			/* Buffer for strm->msg */
};

/* Resides right after inflate_state or deflate_state */
#define GET_DFLTCC_STATE(state) ((struct dfltcc_state *)((state) + 1))

/* External functions */
int dfltcc_can_deflate(z_streamp strm);
int dfltcc_deflate(z_streamp strm,
                   int flush,
                   block_state *result);
void dfltcc_reset(z_streamp strm, uInt size);
int dfltcc_can_inflate(z_streamp strm);
typedef enum {
    DFLTCC_INFLATE_CONTINUE,
    DFLTCC_INFLATE_BREAK,
    DFLTCC_INFLATE_SOFTWARE,
} dfltcc_inflate_action;
dfltcc_inflate_action dfltcc_inflate(z_streamp strm,
                                     int flush, int *ret);
static inline int is_dfltcc_enabled(void)
{
    return (zlib_dfltcc_support != ZLIB_DFLTCC_DISABLED &&
            test_facility(DFLTCC_FACILITY));
}

#define DEFLATE_RESET_HOOK(strm) \
    dfltcc_reset((strm), sizeof(deflate_state))

#define DEFLATE_HOOK dfltcc_deflate

#define DEFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_deflate((strm)))

#define DEFLATE_DFLTCC_ENABLED() is_dfltcc_enabled()

#define INFLATE_RESET_HOOK(strm) \
    dfltcc_reset((strm), sizeof(struct inflate_state))

#define INFLATE_TYPEDO_HOOK(strm, flush) \
    if (dfltcc_can_inflate((strm))) { \
        dfltcc_inflate_action action; \
\
        RESTORE(); \
        action = dfltcc_inflate((strm), (flush), &ret); \
        LOAD(); \
        if (action == DFLTCC_INFLATE_CONTINUE) \
            break; \
        else if (action == DFLTCC_INFLATE_BREAK) \
            goto inf_leave; \
    }

#define INFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_inflate((strm)))

#define INFLATE_NEED_UPDATEWINDOW(strm) (!dfltcc_can_inflate((strm)))

#endif /* DFLTCC_H */
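A toy illustration of the pointer arithmetic behind GET_DFLTCC_STATE (assumption-flagged; the toy types are made up). "(state) + 1" advances by one whole state object, so the macro depends on the workspace layout placing struct dfltcc_state immediately after the zlib state:

struct toy_state { int x; };
struct toy_dfltcc { int y; };

static struct toy_dfltcc *toy_get_dfltcc(struct toy_state *state)
{
	/* one full toy_state past "state", same trick as the macro */
	return (struct toy_dfltcc *)(state + 1);
}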
279	lib/zlib_dfltcc/dfltcc_deflate.c  (new file)

@@ -0,0 +1,279 @@
// SPDX-License-Identifier: Zlib

#include "../zlib_deflate/defutil.h"
#include "dfltcc_util.h"
#include "dfltcc.h"
#include <asm/setup.h>
#include <linux/zutil.h>

/*
 * Compress.
 */
int dfltcc_can_deflate(
    z_streamp strm
)
{
    deflate_state *state = (deflate_state *)strm->state;
    struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);

    /* Check for kernel dfltcc command line parameter */
    if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED ||
            zlib_dfltcc_support == ZLIB_DFLTCC_INFLATE_ONLY)
        return 0;

    /* Unsupported compression settings */
    if (!dfltcc_are_params_ok(state->level, state->w_bits, state->strategy,
                              dfltcc_state->level_mask))
        return 0;

    /* Unsupported hardware */
    if (!is_bit_set(dfltcc_state->af.fns, DFLTCC_GDHT) ||
            !is_bit_set(dfltcc_state->af.fns, DFLTCC_CMPR) ||
            !is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0))
        return 0;

    return 1;
}

static void dfltcc_gdht(
    z_streamp strm
)
{
    deflate_state *state = (deflate_state *)strm->state;
    struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param;
    size_t avail_in = strm->avail_in;

    dfltcc(DFLTCC_GDHT,
           param, NULL, NULL,
           &strm->next_in, &avail_in, NULL);
}

static dfltcc_cc dfltcc_cmpr(
    z_streamp strm
)
{
    deflate_state *state = (deflate_state *)strm->state;
    struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param;
    size_t avail_in = strm->avail_in;
    size_t avail_out = strm->avail_out;
    dfltcc_cc cc;

    cc = dfltcc(DFLTCC_CMPR | HBT_CIRCULAR,
                param, &strm->next_out, &avail_out,
                &strm->next_in, &avail_in, state->window);
    strm->total_in += (strm->avail_in - avail_in);
    strm->total_out += (strm->avail_out - avail_out);
    strm->avail_in = avail_in;
    strm->avail_out = avail_out;
    return cc;
}

static void send_eobs(
    z_streamp strm,
    const struct dfltcc_param_v0 *param
)
{
    deflate_state *state = (deflate_state *)strm->state;

    zlib_tr_send_bits(
          state,
          bi_reverse(param->eobs >> (15 - param->eobl), param->eobl),
          param->eobl);
    flush_pending(strm);
    if (state->pending != 0) {
        /* The remaining data is located in pending_out[0:pending]. If someone
         * calls put_byte() - this might happen in deflate() - the byte will be
         * placed into pending_buf[pending], which is incorrect. Move the
         * remaining data to the beginning of pending_buf so that put_byte() is
         * usable again.
         */
        memmove(state->pending_buf, state->pending_out, state->pending);
        state->pending_out = state->pending_buf;
    }
#ifdef ZLIB_DEBUG
    state->compressed_len += param->eobl;
#endif
}

int dfltcc_deflate(
    z_streamp strm,
    int flush,
    block_state *result
)
{
    deflate_state *state = (deflate_state *)strm->state;
    struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);
    struct dfltcc_param_v0 *param = &dfltcc_state->param;
    uInt masked_avail_in;
    dfltcc_cc cc;
    int need_empty_block;
    int soft_bcc;
    int no_flush;

    if (!dfltcc_can_deflate(strm))
        return 0;

again:
    masked_avail_in = 0;
    soft_bcc = 0;
    no_flush = flush == Z_NO_FLUSH;

    /* Trailing empty block. Switch to software, except when Continuation Flag
     * is set, which means that DFLTCC has buffered some output in the
     * parameter block and needs to be called again in order to flush it.
     */
    if (flush == Z_FINISH && strm->avail_in == 0 && !param->cf) {
        if (param->bcf) {
            /* A block is still open, and the hardware does not support closing
             * blocks without adding data. Thus, close it manually.
             */
            send_eobs(strm, param);
            param->bcf = 0;
        }
        return 0;
    }

    if (strm->avail_in == 0 && !param->cf) {
        *result = need_more;
        return 1;
    }

    /* There is an open non-BFINAL block, we are not going to close it just
     * yet, we have compressed more than DFLTCC_BLOCK_SIZE bytes and we see
     * more than DFLTCC_DHT_MIN_SAMPLE_SIZE bytes. Open a new block with a new
     * DHT in order to adapt to a possibly changed input data distribution.
     */
    if (param->bcf && no_flush &&
            strm->total_in > dfltcc_state->block_threshold &&
            strm->avail_in >= dfltcc_state->dht_threshold) {
        if (param->cf) {
            /* We need to flush the DFLTCC buffer before writing the
             * End-of-block Symbol. Mask the input data and proceed as usual.
             */
            masked_avail_in += strm->avail_in;
            strm->avail_in = 0;
            no_flush = 0;
        } else {
            /* DFLTCC buffer is empty, so we can manually write the
             * End-of-block Symbol right away.
             */
            send_eobs(strm, param);
            param->bcf = 0;
            dfltcc_state->block_threshold =
                strm->total_in + dfltcc_state->block_size;
            if (strm->avail_out == 0) {
                *result = need_more;
                return 1;
            }
        }
    }

    /* The caller gave us too much data. Pass only one block worth of
     * uncompressed data to DFLTCC and mask the rest, so that on the next
     * iteration we start a new block.
     */
    if (no_flush && strm->avail_in > dfltcc_state->block_size) {
        masked_avail_in += (strm->avail_in - dfltcc_state->block_size);
        strm->avail_in = dfltcc_state->block_size;
    }

    /* When we have an open non-BFINAL deflate block and caller indicates that
     * the stream is ending, we need to close an open deflate block and open a
     * BFINAL one.
     */
    need_empty_block = flush == Z_FINISH && param->bcf && !param->bhf;

    /* Translate stream to parameter block */
    param->cvt = CVT_ADLER32;
    if (!no_flush)
        /* We need to close a block. Always do this in software - when there is
         * no input data, the hardware will not honor BCC. */
        soft_bcc = 1;
    if (flush == Z_FINISH && !param->bcf)
        /* We are about to open a BFINAL block, set Block Header Final bit
         * until the stream ends.
         */
        param->bhf = 1;
    /* DFLTCC-CMPR will write to next_out, so make sure that buffers with
     * higher precedence are empty.
     */
    Assert(state->pending == 0, "There must be no pending bytes");
    Assert(state->bi_valid < 8, "There must be less than 8 pending bits");
    param->sbb = (unsigned int)state->bi_valid;
    if (param->sbb > 0)
        *strm->next_out = (Byte)state->bi_buf;
    if (param->hl)
        param->nt = 0; /* Honor history */
    param->cv = strm->adler;

    /* When opening a block, choose a Huffman-Table Type */
    if (!param->bcf) {
        if (strm->total_in == 0 && dfltcc_state->block_threshold > 0) {
            param->htt = HTT_FIXED;
        }
        else {
            param->htt = HTT_DYNAMIC;
            dfltcc_gdht(strm);
        }
    }

    /* Deflate */
    do {
        cc = dfltcc_cmpr(strm);
        if (strm->avail_in < 4096 && masked_avail_in > 0)
            /* We are about to call DFLTCC with a small input buffer, which is
             * inefficient. Since there is masked data, there will be at least
             * one more DFLTCC call, so skip the current one and make the next
             * one handle more data.
             */
            break;
    } while (cc == DFLTCC_CC_AGAIN);

    /* Translate parameter block to stream */
    strm->msg = oesc_msg(dfltcc_state->msg, param->oesc);
    state->bi_valid = param->sbb;
    if (state->bi_valid == 0)
        state->bi_buf = 0; /* Avoid accessing next_out */
    else
        state->bi_buf = *strm->next_out & ((1 << state->bi_valid) - 1);
    strm->adler = param->cv;

    /* Unmask the input data */
    strm->avail_in += masked_avail_in;
    masked_avail_in = 0;

    /* If we encounter an error, it means there is a bug in DFLTCC call */
    Assert(cc != DFLTCC_CC_OP2_CORRUPT || param->oesc == 0, "BUG");

    /* Update Block-Continuation Flag. It will be used to check whether to call
     * GDHT the next time.
     */
    if (cc == DFLTCC_CC_OK) {
        if (soft_bcc) {
            send_eobs(strm, param);
            param->bcf = 0;
            dfltcc_state->block_threshold =
                strm->total_in + dfltcc_state->block_size;
        } else
            param->bcf = 1;
        if (flush == Z_FINISH) {
            if (need_empty_block)
                /* Make the current deflate() call also close the stream */
                return 0;
            else {
                bi_windup(state);
                *result = finish_done;
            }
        } else {
            if (flush == Z_FULL_FLUSH)
                param->hl = 0; /* Clear history */
            *result = flush == Z_NO_FLUSH ? need_more : block_done;
        }
    } else {
        param->bcf = 1;
        *result = need_more;
    }
    if (strm->avail_in != 0 && strm->avail_out != 0)
        goto again; /* deflate() must use all input or all output */
    return 1;
}
149
lib/zlib_dfltcc/dfltcc_inflate.c
Normal file
149
lib/zlib_dfltcc/dfltcc_inflate.c
Normal file
|
@ -0,0 +1,149 @@
|
|||
// SPDX-License-Identifier: Zlib

#include "../zlib_inflate/inflate.h"
#include "dfltcc_util.h"
#include "dfltcc.h"
#include <asm/setup.h>
#include <linux/zutil.h>

/*
 * Expand.
 */
int dfltcc_can_inflate(
    z_streamp strm
)
{
    struct inflate_state *state = (struct inflate_state *)strm->state;
    struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);

    /* Check for kernel dfltcc command line parameter */
    if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED ||
            zlib_dfltcc_support == ZLIB_DFLTCC_DEFLATE_ONLY)
        return 0;

    /* Unsupported compression settings */
    if (state->wbits != HB_BITS)
        return 0;

    /* Unsupported hardware */
    return is_bit_set(dfltcc_state->af.fns, DFLTCC_XPND) &&
           is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0);
}
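/*
 * Editorial note, not part of the original patch: dfltcc_can_inflate() above
 * gates hardware use on facility bits tested via is_bit_set(). A plausible
 * sketch of such a helper (assuming the z/Architecture convention that bit 0
 * is the most-significant bit of the first byte) would be:
 *
 *     static inline int is_bit_set(const char *bits, int n)
 *     {
 *         return bits[n / 8] & (1 << (7 - (n % 8)));
 *     }
 */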
static int dfltcc_was_inflate_used(
    z_streamp strm
)
{
    struct inflate_state *state = (struct inflate_state *)strm->state;
    struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param;

    return !param->nt;
}

static int dfltcc_inflate_disable(
    z_streamp strm
)
{
    struct inflate_state *state = (struct inflate_state *)strm->state;
    struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);

    if (!dfltcc_can_inflate(strm))
        return 0;
    if (dfltcc_was_inflate_used(strm))
        /* DFLTCC has already decompressed some data. Since there is not
         * enough information to resume decompression in software, the call
         * must fail.
         */
        return 1;
    /* DFLTCC was not used yet - decompress in software */
    memset(&dfltcc_state->af, 0, sizeof(dfltcc_state->af));
    return 0;
}

static dfltcc_cc dfltcc_xpnd(
    z_streamp strm
)
{
    struct inflate_state *state = (struct inflate_state *)strm->state;
    struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param;
    size_t avail_in = strm->avail_in;
    size_t avail_out = strm->avail_out;
    dfltcc_cc cc;

    cc = dfltcc(DFLTCC_XPND | HBT_CIRCULAR,
                param, &strm->next_out, &avail_out,
                &strm->next_in, &avail_in, state->window);
    strm->avail_in = avail_in;
    strm->avail_out = avail_out;
    return cc;
}

dfltcc_inflate_action dfltcc_inflate(
    z_streamp strm,
    int flush,
    int *ret
)
{
    struct inflate_state *state = (struct inflate_state *)strm->state;
    struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);
    struct dfltcc_param_v0 *param = &dfltcc_state->param;
    dfltcc_cc cc;

    if (flush == Z_BLOCK) {
        /* DFLTCC does not support stopping on block boundaries */
        if (dfltcc_inflate_disable(strm)) {
            *ret = Z_STREAM_ERROR;
            return DFLTCC_INFLATE_BREAK;
        } else
            return DFLTCC_INFLATE_SOFTWARE;
    }

    if (state->last) {
        if (state->bits != 0) {
            strm->next_in++;
            strm->avail_in--;
            state->bits = 0;
        }
        state->mode = CHECK;
        return DFLTCC_INFLATE_CONTINUE;
    }

    if (strm->avail_in == 0 && !param->cf)
        return DFLTCC_INFLATE_BREAK;

    if (!state->window || state->wsize == 0) {
        state->mode = MEM;
        return DFLTCC_INFLATE_CONTINUE;
    }

    /* Translate stream to parameter block */
    param->cvt = CVT_ADLER32;
    param->sbb = state->bits;
    param->hl = state->whave; /* Software and hardware history formats match */
    param->ho = (state->write - state->whave) & ((1 << HB_BITS) - 1);
    if (param->hl)
        param->nt = 0; /* Honor history for the first block */
    param->cv = state->flags ? REVERSE(state->check) : state->check;

    /* Inflate */
    do {
        cc = dfltcc_xpnd(strm);
    } while (cc == DFLTCC_CC_AGAIN);

    /* Translate parameter block to stream */
    strm->msg = oesc_msg(dfltcc_state->msg, param->oesc);
    state->last = cc == DFLTCC_CC_OK;
    state->bits = param->sbb;
    state->whave = param->hl;
    state->write = (param->ho + param->hl) & ((1 << HB_BITS) - 1);
    state->check = state->flags ? REVERSE(param->cv) : param->cv;
    if (cc == DFLTCC_CC_OP2_CORRUPT && param->oesc != 0) {
        /* Report an error if the stream is corrupted */
        state->mode = BAD;
        return DFLTCC_INFLATE_CONTINUE;
    }
    state->mode = TYPEDO;
    /* Break if operands are exhausted, otherwise continue looping */
    return (cc == DFLTCC_CC_OP1_TOO_SHORT || cc == DFLTCC_CC_OP2_TOO_SHORT) ?
        DFLTCC_INFLATE_BREAK : DFLTCC_INFLATE_CONTINUE;
}
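
The history-offset translation in dfltcc_inflate() above is easy to get
wrong, so here is a small self-contained sketch, not from the kernel tree
(HB_BITS and the state values are illustrative), showing that the ho/hl
encoding round-trips through the circular-window arithmetic:

#include <assert.h>
#include <stdio.h>

#define HB_BITS 15                    /* window is 1 << HB_BITS bytes */
#define HB_MASK ((1u << HB_BITS) - 1)

int main(void)
{
    /* Illustrative software-side state: next write position and the
     * number of valid history bytes in the circular window. */
    unsigned int write = 100, whave = 300;

    /* Software -> parameter block: history offset wraps around zero. */
    unsigned int ho = (write - whave) & HB_MASK;
    unsigned int hl = whave;

    /* Parameter block -> software: recompute the write position. */
    unsigned int write2 = (ho + hl) & HB_MASK;

    assert(write2 == write);
    printf("ho=%u hl=%u write=%u\n", ho, hl, write2);
    return 0;
}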