Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Minor conflict in mlx5 because changes were made to code that has
since been moved.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2020-01-26 10:40:21 +01:00
commit 4d8773b68e
191 changed files with 2630 additions and 1090 deletions


@ -25,10 +25,6 @@ good performance with large indices. If your index can be larger than
``ULONG_MAX`` then the XArray is not the data type for you. The most
important user of the XArray is the page cache.
Each non-``NULL`` entry in the array has three bits associated with
it called marks. Each mark may be set or cleared independently of
the others. You can iterate over entries which are marked.
Normal pointers may be stored in the XArray directly. They must be 4-byte
aligned, which is true for any pointer returned from kmalloc() and
alloc_page(). It isn't true for arbitrary user-space pointers,
@ -41,12 +37,11 @@ When you retrieve an entry from the XArray, you can check whether it is
a value entry by calling xa_is_value(), and convert it back to
an integer by calling xa_to_value().
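As a minimal sketch of that round trip (illustrative, not part of this
patch; it assumes an XArray defined elsewhere with ``DEFINE_XARRAY(array)``)::

    void *entry;

    /* Store the integer 42 at index 1 as a value entry. */
    xa_store(&array, 1, xa_mk_value(42), GFP_KERNEL);

    entry = xa_load(&array, 1);
    if (xa_is_value(entry))
            pr_info("stored value: %lu\n", xa_to_value(entry));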
Some users want to store tagged pointers instead of using the marks
described above. They can call xa_tag_pointer() to create an
entry with a tag, xa_untag_pointer() to turn a tagged entry
back into an untagged pointer and xa_pointer_tag() to retrieve
the tag of an entry. Tagged pointers use the same bits that are used
to distinguish value entries from normal pointers, so each user must
Some users want to tag the pointers they store in the XArray. You can
call xa_tag_pointer() to create an entry with a tag, xa_untag_pointer()
to turn a tagged entry back into an untagged pointer and xa_pointer_tag()
to retrieve the tag of an entry. Tagged pointers use the same bits that
are used to distinguish value entries from normal pointers, so you must
decide whether to store value entries or tagged pointers in
any particular XArray.
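A sketch of the same helpers, where ``p`` is assumed to be a suitably
aligned pointer (e.g. from kmalloc()) and the tag is a small integer
that fits in the low bits freed by that alignment::

    void *entry = xa_tag_pointer(p, 1);     /* attach tag 1 */

    xa_store(&array, 7, entry, GFP_KERNEL);
    entry = xa_load(&array, 7);
    pr_info("tag %u, pointer %p\n",
            xa_pointer_tag(entry), xa_untag_pointer(entry));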
@ -56,10 +51,9 @@ conflict with value entries or internal entries.
An unusual feature of the XArray is the ability to create entries which
occupy a range of indices. Once stored to, looking up any index in
the range will return the same entry as looking up any other index in
the range. Setting a mark on one index will set it on all of them.
Storing to any index will store to all of them. Multi-index entries can
be explicitly split into smaller entries, or storing ``NULL`` into any
entry will cause the XArray to forget about the range.
the range. Storing to any index will store to all of them. Multi-index
entries can be explicitly split into smaller entries, or storing ``NULL``
into any entry will cause the XArray to forget about the range.
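For example, a sketch using xa_store_range() (part of the normal API
described below) to cover indices 0-15 with a single entry::

    xa_store_range(&array, 0, 15, item, GFP_KERNEL);

    /* Every index in 0-15 returns the same entry... */
    WARN_ON(xa_load(&array, 3) != xa_load(&array, 12));

    /* ...and storing NULL at any of them forgets the whole range. */
    xa_store(&array, 9, NULL, GFP_KERNEL);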
Normal API
==========
@ -87,17 +81,11 @@ If you want to only store a new entry to an index if the current entry
at that index is ``NULL``, you can use xa_insert() which
returns ``-EBUSY`` if the entry is not empty.
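A minimal sketch, assuming ``item`` should be published at index 5
exactly once::

    int err = xa_insert(&array, 5, item, GFP_KERNEL);

    if (err == -EBUSY)
            pr_info("index 5 was already occupied\n");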
You can enquire whether a mark is set on an entry by using
xa_get_mark(). If the entry is not ``NULL``, you can set a mark
on it by using xa_set_mark() and remove the mark from an entry by
calling xa_clear_mark(). You can ask whether any entry in the
XArray has a particular mark set by calling xa_marked().
You can copy entries out of the XArray into a plain array by calling
xa_extract(). Or you can iterate over the present entries in
the XArray by calling xa_for_each(). You may prefer to use
xa_find() or xa_find_after() to move to the next present
entry in the XArray.
xa_extract(). Or you can iterate over the present entries in the XArray
by calling xa_for_each(), xa_for_each_start() or xa_for_each_range().
You may prefer to use xa_find() or xa_find_after() to move to the next
present entry in the XArray.
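For instance, a sketch of a full walk over the present entries::

    unsigned long index;
    void *entry;

    xa_for_each(&array, index, entry)
            pr_info("index %lu -> %p\n", index, entry);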
Calling xa_store_range() stores the same entry in a range
of indices. If you do this, some of the other operations will behave
@ -124,6 +112,31 @@ xa_destroy(). If the XArray entries are pointers, you may wish
to free the entries first. You can do this by iterating over all present
entries in the XArray using the xa_for_each() iterator.
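A sketch of that teardown pattern, assuming every entry is a pointer
obtained from kmalloc()::

    unsigned long index;
    void *entry;

    xa_for_each(&array, index, entry)
            kfree(entry);
    xa_destroy(&array);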
Search Marks
------------
Each entry in the array has three bits associated with it called marks.
Each mark may be set or cleared independently of the others. You can
iterate over marked entries by using the xa_for_each_marked() iterator.
You can enquire whether a mark is set on an entry by using
xa_get_mark(). If the entry is not ``NULL``, you can set a mark on it
by using xa_set_mark() and remove the mark from an entry by calling
xa_clear_mark(). You can ask whether any entry in the XArray has a
particular mark set by calling xa_marked(). Erasing an entry from the
XArray causes all marks associated with that entry to be cleared.
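For example, a sketch that marks the entry at index 1 and then walks
only the marked entries (``index`` and ``entry`` declared as for
xa_for_each())::

    xa_set_mark(&array, 1, XA_MARK_0);

    if (xa_get_mark(&array, 1, XA_MARK_0))
            pr_info("index 1 is marked\n");

    xa_for_each_marked(&array, index, entry, XA_MARK_0)
            pr_info("marked entry at index %lu\n", index);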
Setting or clearing a mark on any index of a multi-index entry will
affect all indices covered by that entry. Querying the mark on any
index will return the same result.
There is no way to iterate over entries which are not marked; the data
structure does not allow this to be implemented efficiently. There are
currently no iterators to search for logical combinations of bits (e.g.
iterate over all entries which have both ``XA_MARK_1`` and ``XA_MARK_2``
set, or iterate over all entries which have ``XA_MARK_0`` or ``XA_MARK_2``
set). It would be possible to add these if a user arises.
Allocating XArrays
------------------
@ -180,6 +193,8 @@ No lock needed:
Takes RCU read lock:
* xa_load()
* xa_for_each()
* xa_for_each_start()
* xa_for_each_range()
* xa_find()
* xa_find_after()
* xa_extract()
@ -419,10 +434,9 @@ you last processed. If you have interrupts disabled while iterating,
then it is good manners to pause the iteration and reenable interrupts
every ``XA_CHECK_SCHED`` entries.
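A sketch of that pattern under the RCU read lock, where process() is a
hypothetical stand-in for whatever per-entry work you do::

    XA_STATE(xas, &array, 0);
    unsigned int i = 0;
    void *entry;

    rcu_read_lock();
    xas_for_each(&xas, entry, ULONG_MAX) {
            process(entry);         /* hypothetical per-entry work */
            if (++i % XA_CHECK_SCHED)
                    continue;
            xas_pause(&xas);
            rcu_read_unlock();
            cond_resched();
            rcu_read_lock();
    }
    rcu_read_unlock();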
The xas_get_mark(), xas_set_mark() and
xas_clear_mark() functions require the xa_state cursor to have
been moved to the appropriate location in the xarray; they will do
nothing if you have called xas_pause() or xas_set()
The xas_get_mark(), xas_set_mark() and xas_clear_mark() functions require
the xa_state cursor to have been moved to the appropriate location in the
XArray; they will do nothing if you have called xas_pause() or xas_set()
immediately before.
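A minimal sketch: loading through an xa_state positions the cursor,
after which the mark operation is valid::

    XA_STATE(xas, &array, index);

    xas_lock(&xas);
    if (xas_load(&xas))             /* moves the cursor to index */
            xas_set_mark(&xas, XA_MARK_0);
    xas_unlock(&xas);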
You can call xas_set_update() to have a callback function


@ -403,6 +403,19 @@ PROPERTIES
The settings and programming routines for internal/external
MDIO are different. Must be included for internal MDIO.
- fsl,erratum-a011043
Usage: optional
Value type: <boolean>
Definition: Indicates the presence of the A011043 erratum: when
reading internal PCS registers through MDIO, the
MDIO_CFG[MDIO_RD_ER] bit may be falsely set even though no error
occurred and the read data (MDIO_DATA[MDIO_DATA]) is correct. As a
workaround, all internal MDIO accesses should ignore the
MDIO_CFG[MDIO_RD_ER] bit.
For internal PHY device on internal mdio bus, a PHY node should be created.
See the definition of the PHY node in booting-without-of.txt for an
example of how to define a PHY (Internal PHY has no interrupt line).


@ -6198,6 +6198,7 @@ ETHERNET PHY LIBRARY
M: Andrew Lunn <andrew@lunn.ch>
M: Florian Fainelli <f.fainelli@gmail.com>
M: Heiner Kallweit <hkallweit1@gmail.com>
R: Russell King <linux@armlinux.org.uk>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-class-net-phydev
@ -8570,7 +8571,7 @@ S: Maintained
F: drivers/platform/x86/intel-vbtn.c
INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy)
M: Stanislaw Gruszka <sgruszka@redhat.com>
M: Stanislaw Gruszka <stf_xl@wp.pl>
L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/net/wireless/intel/iwlegacy/
@ -11500,6 +11501,7 @@ F: drivers/net/dsa/
NETWORKING [GENERAL]
M: "David S. Miller" <davem@davemloft.net>
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
W: http://www.linuxfoundation.org/en/Net
Q: http://patchwork.ozlabs.org/project/netdev/list/
@ -13840,7 +13842,7 @@ S: Maintained
F: arch/mips/ralink
RALINK RT2X00 WIRELESS LAN DRIVER
M: Stanislaw Gruszka <sgruszka@redhat.com>
M: Stanislaw Gruszka <stf_xl@wp.pl>
M: Helmut Schaa <helmut.schaa@googlemail.com>
L: linux-wireless@vger.kernel.org
S: Maintained
@ -16622,7 +16624,7 @@ F: kernel/time/ntp.c
F: tools/testing/selftests/timers/
TIPC NETWORK LAYER
M: Jon Maloy <jon.maloy@ericsson.com>
M: Jon Maloy <jmaloy@redhat.com>
M: Ying Xue <ying.xue@windriver.com>
L: netdev@vger.kernel.org (core kernel code)
L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 5
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*


@ -131,6 +131,11 @@ &mcasp0 {
};
/ {
memory@80000000 {
device_type = "memory";
reg = <0x80000000 0x20000000>; /* 512 MB */
};
clk_mcasp0_fixed: clk_mcasp0_fixed {
#clock-cells = <0>;
compatible = "fixed-clock";


@ -848,6 +848,7 @@ &spi0 {
pinctrl-names = "default", "sleep";
pinctrl-0 = <&spi0_pins_default>;
pinctrl-1 = <&spi0_pins_sleep>;
ti,pindir-d0-out-d1-in = <1>;
};
&spi1 {
@ -855,6 +856,7 @@ &spi1 {
pinctrl-names = "default", "sleep";
pinctrl-0 = <&spi1_pins_default>;
pinctrl-1 = <&spi1_pins_sleep>;
ti,pindir-d0-out-d1-in = <1>;
};
&usb2_phy1 {


@ -146,10 +146,9 @@ ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE
#if !defined(ZIMAGE) && defined(CONFIG_ARM_ARCH_TIMER)
@ make CNTP_* and CNTPCT accessible from PL1
mrc p15, 0, r7, c0, c1, 1 @ ID_PFR1
lsr r7, #16
and r7, #0xf
cmp r7, #1
bne 1f
ubfx r7, r7, #16, #4
teq r7, #0
beq 1f
mrc p15, 4, r7, c14, c1, 0 @ CNTHCTL
orr r7, r7, #3 @ PL1PCEN | PL1PCTEN
mcr p15, 4, r7, c14, c1, 0 @ CNTHCTL


@ -455,11 +455,7 @@ config PPC_TRANSACTIONAL_MEM
config PPC_UV
bool "Ultravisor support"
depends on KVM_BOOK3S_HV_POSSIBLE
select ZONE_DEVICE
select DEV_PAGEMAP_OPS
select DEVICE_PRIVATE
select MEMORY_HOTPLUG
select MEMORY_HOTREMOVE
depends on DEVICE_PRIVATE
default n
help
This option paravirtualizes the kernel to run in POWER platforms that


@ -63,6 +63,7 @@ mdio@e1000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe1000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy0: ethernet-phy@0 {
reg = <0x0>;


@ -60,6 +60,7 @@ mdio@f1000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xf1000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy6: ethernet-phy@0 {
reg = <0x0>;


@ -63,6 +63,7 @@ mdio@e3000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy1: ethernet-phy@0 {
reg = <0x0>;


@ -60,6 +60,7 @@ mdio@f3000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xf3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy7: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e1000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe1000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy0: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e3000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy1: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e5000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe5000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy2: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e7000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe7000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy3: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e9000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe9000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy4: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@eb000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xeb000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy5: ethernet-phy@0 {
reg = <0x0>;


@ -60,6 +60,7 @@ mdio@f1000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xf1000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy14: ethernet-phy@0 {
reg = <0x0>;


@ -60,6 +60,7 @@ mdio@f3000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xf3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy15: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e1000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe1000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy8: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e3000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy9: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e5000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe5000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy10: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e7000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe7000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy11: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@e9000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe9000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy12: ethernet-phy@0 {
reg = <0x0>;


@ -59,6 +59,7 @@ mdio@eb000 {
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xeb000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy13: ethernet-phy@0 {
reg = <0x0>;


@ -600,8 +600,11 @@ extern void slb_set_size(u16 size);
*
*/
#define MAX_USER_CONTEXT ((ASM_CONST(1) << CONTEXT_BITS) - 2)
// The + 2 accounts for INVALID_REGION and 1 more to avoid overlap with kernel
#define MIN_USER_CONTEXT (MAX_KERNEL_CTX_CNT + MAX_VMALLOC_CTX_CNT + \
MAX_IO_CTX_CNT + MAX_VMEMMAP_CTX_CNT)
MAX_IO_CTX_CNT + MAX_VMEMMAP_CTX_CNT + 2)
/*
* For platforms that support only 65-bit VA we limit the context bits
*/


@ -39,6 +39,7 @@
#define XIVE_ESB_VAL_P 0x2
#define XIVE_ESB_VAL_Q 0x1
#define XIVE_ESB_INVALID 0xFF
/*
* Thread Management (aka "TM") registers


@ -972,12 +972,21 @@ static int xive_get_irqchip_state(struct irq_data *data,
enum irqchip_irq_state which, bool *state)
{
struct xive_irq_data *xd = irq_data_get_irq_handler_data(data);
u8 pq;
switch (which) {
case IRQCHIP_STATE_ACTIVE:
*state = !xd->stale_p &&
(xd->saved_p ||
!!(xive_esb_read(xd, XIVE_ESB_GET) & XIVE_ESB_VAL_P));
pq = xive_esb_read(xd, XIVE_ESB_GET);
/*
* The esb value being all 1's means we couldn't get
* the PQ state of the interrupt through mmio. It may
* happen, for example when querying a PHB interrupt
* while the PHB is in an error state. We consider the
* interrupt to be inactive in that case.
*/
*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
return 0;
default:
return -EINVAL;


@ -912,6 +912,7 @@ static int fs_open(struct atm_vcc *atm_vcc)
}
if (!to) {
printk ("No more free channels for FS50..\n");
kfree(vcc);
return -EBUSY;
}
vcc->channo = dev->channo;
@ -922,6 +923,7 @@ static int fs_open(struct atm_vcc *atm_vcc)
if (((DO_DIRECTION(rxtp) && dev->atm_vccs[vcc->channo])) ||
( DO_DIRECTION(txtp) && test_bit (vcc->channo, dev->tx_inuse))) {
printk ("Channel is in use for FS155.\n");
kfree(vcc);
return -EBUSY;
}
}
@ -935,6 +937,7 @@ static int fs_open(struct atm_vcc *atm_vcc)
tc, sizeof (struct fs_transmit_config));
if (!tc) {
fs_dprintk (FS_DEBUG_OPEN, "fs: can't alloc transmit_config.\n");
kfree(vcc);
return -ENOMEM;
}


@ -1004,7 +1004,7 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x734F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI14},
/* Renoir */
{0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU|AMD_EXP_HW_SUPPORT},
{0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
/* Navi12 */
{0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12|AMD_EXP_HW_SUPPORT},


@ -1916,73 +1916,90 @@ static u8 drm_dp_calculate_rad(struct drm_dp_mst_port *port,
return parent_lct + 1;
}
static int drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt)
static bool drm_dp_mst_is_dp_mst_end_device(u8 pdt, bool mcs)
{
switch (pdt) {
case DP_PEER_DEVICE_DP_LEGACY_CONV:
case DP_PEER_DEVICE_SST_SINK:
return true;
case DP_PEER_DEVICE_MST_BRANCHING:
/* For sst branch device */
if (!mcs)
return true;
return false;
}
return true;
}
static int
drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
bool new_mcs)
{
struct drm_dp_mst_topology_mgr *mgr = port->mgr;
struct drm_dp_mst_branch *mstb;
u8 rad[8], lct;
int ret = 0;
if (port->pdt == new_pdt)
if (port->pdt == new_pdt && port->mcs == new_mcs)
return 0;
/* Teardown the old pdt, if there is one */
switch (port->pdt) {
case DP_PEER_DEVICE_DP_LEGACY_CONV:
case DP_PEER_DEVICE_SST_SINK:
/*
* If the new PDT would also have an i2c bus, don't bother
* with reregistering it
*/
if (new_pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
new_pdt == DP_PEER_DEVICE_SST_SINK) {
port->pdt = new_pdt;
return 0;
}
if (port->pdt != DP_PEER_DEVICE_NONE) {
if (drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) {
/*
* If the new PDT would also have an i2c bus,
* don't bother with reregistering it
*/
if (new_pdt != DP_PEER_DEVICE_NONE &&
drm_dp_mst_is_dp_mst_end_device(new_pdt, new_mcs)) {
port->pdt = new_pdt;
port->mcs = new_mcs;
return 0;
}
/* remove i2c over sideband */
drm_dp_mst_unregister_i2c_bus(&port->aux);
break;
case DP_PEER_DEVICE_MST_BRANCHING:
mutex_lock(&mgr->lock);
drm_dp_mst_topology_put_mstb(port->mstb);
port->mstb = NULL;
mutex_unlock(&mgr->lock);
break;
/* remove i2c over sideband */
drm_dp_mst_unregister_i2c_bus(&port->aux);
} else {
mutex_lock(&mgr->lock);
drm_dp_mst_topology_put_mstb(port->mstb);
port->mstb = NULL;
mutex_unlock(&mgr->lock);
}
}
port->pdt = new_pdt;
switch (port->pdt) {
case DP_PEER_DEVICE_DP_LEGACY_CONV:
case DP_PEER_DEVICE_SST_SINK:
/* add i2c over sideband */
ret = drm_dp_mst_register_i2c_bus(&port->aux);
break;
port->mcs = new_mcs;
case DP_PEER_DEVICE_MST_BRANCHING:
lct = drm_dp_calculate_rad(port, rad);
mstb = drm_dp_add_mst_branch_device(lct, rad);
if (!mstb) {
ret = -ENOMEM;
DRM_ERROR("Failed to create MSTB for port %p", port);
goto out;
if (port->pdt != DP_PEER_DEVICE_NONE) {
if (drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) {
/* add i2c over sideband */
ret = drm_dp_mst_register_i2c_bus(&port->aux);
} else {
lct = drm_dp_calculate_rad(port, rad);
mstb = drm_dp_add_mst_branch_device(lct, rad);
if (!mstb) {
ret = -ENOMEM;
DRM_ERROR("Failed to create MSTB for port %p",
port);
goto out;
}
mutex_lock(&mgr->lock);
port->mstb = mstb;
mstb->mgr = port->mgr;
mstb->port_parent = port;
/*
* Make sure this port's memory allocation stays
* around until its child MSTB releases it
*/
drm_dp_mst_get_port_malloc(port);
mutex_unlock(&mgr->lock);
/* And make sure we send a link address for this */
ret = 1;
}
mutex_lock(&mgr->lock);
port->mstb = mstb;
mstb->mgr = port->mgr;
mstb->port_parent = port;
/*
* Make sure this port's memory allocation stays
* around until its child MSTB releases it
*/
drm_dp_mst_get_port_malloc(port);
mutex_unlock(&mgr->lock);
/* And make sure we send a link address for this */
ret = 1;
break;
}
out:
@ -2135,9 +2152,8 @@ drm_dp_mst_port_add_connector(struct drm_dp_mst_branch *mstb,
goto error;
}
if ((port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
port->pdt == DP_PEER_DEVICE_SST_SINK) &&
port->port_num >= DP_MST_LOGICAL_PORT_0) {
if (port->pdt != DP_PEER_DEVICE_NONE &&
drm_dp_mst_is_dp_mst_end_device(port->pdt, port->mcs)) {
port->cached_edid = drm_get_edid(port->connector,
&port->aux.ddc);
drm_connector_set_tile_property(port->connector);
@ -2201,6 +2217,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
struct drm_dp_mst_port *port;
int old_ddps = 0, ret;
u8 new_pdt = DP_PEER_DEVICE_NONE;
bool new_mcs = 0;
bool created = false, send_link_addr = false, changed = false;
port = drm_dp_get_port(mstb, port_msg->port_number);
@ -2245,7 +2262,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
port->input = port_msg->input_port;
if (!port->input)
new_pdt = port_msg->peer_device_type;
port->mcs = port_msg->mcs;
new_mcs = port_msg->mcs;
port->ddps = port_msg->ddps;
port->ldps = port_msg->legacy_device_plug_status;
port->dpcd_rev = port_msg->dpcd_revision;
@ -2272,7 +2289,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
}
}
ret = drm_dp_port_set_pdt(port, new_pdt);
ret = drm_dp_port_set_pdt(port, new_pdt, new_mcs);
if (ret == 1) {
send_link_addr = true;
} else if (ret < 0) {
@ -2286,7 +2303,8 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
* we're coming out of suspend. In this case, always resend the link
* address if there's an MSTB on this port
*/
if (!created && port->pdt == DP_PEER_DEVICE_MST_BRANCHING)
if (!created && port->pdt == DP_PEER_DEVICE_MST_BRANCHING &&
port->mcs)
send_link_addr = true;
if (port->connector)
@ -2323,6 +2341,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
struct drm_dp_mst_port *port;
int old_ddps, old_input, ret, i;
u8 new_pdt;
bool new_mcs;
bool dowork = false, create_connector = false;
port = drm_dp_get_port(mstb, conn_stat->port_number);
@ -2354,7 +2373,6 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
old_ddps = port->ddps;
old_input = port->input;
port->input = conn_stat->input_port;
port->mcs = conn_stat->message_capability_status;
port->ldps = conn_stat->legacy_device_plug_status;
port->ddps = conn_stat->displayport_device_plug_status;
@ -2367,8 +2385,8 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
}
new_pdt = port->input ? DP_PEER_DEVICE_NONE : conn_stat->peer_device_type;
ret = drm_dp_port_set_pdt(port, new_pdt);
new_mcs = conn_stat->message_capability_status;
ret = drm_dp_port_set_pdt(port, new_pdt, new_mcs);
if (ret == 1) {
dowork = true;
} else if (ret < 0) {
@ -3929,6 +3947,8 @@ drm_dp_mst_detect_port(struct drm_connector *connector,
switch (port->pdt) {
case DP_PEER_DEVICE_NONE:
case DP_PEER_DEVICE_MST_BRANCHING:
if (!port->mcs)
ret = connector_status_connected;
break;
case DP_PEER_DEVICE_SST_SINK:
@ -4541,7 +4561,7 @@ drm_dp_delayed_destroy_port(struct drm_dp_mst_port *port)
if (port->connector)
port->mgr->cbs->destroy_connector(port->mgr, port->connector);
drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE);
drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
drm_dp_mst_put_port_malloc(port);
}


@ -9,16 +9,16 @@
#include "i915_gem_ioctls.h"
#include "i915_gem_object.h"
static __always_inline u32 __busy_read_flag(u8 id)
static __always_inline u32 __busy_read_flag(u16 id)
{
if (id == (u8)I915_ENGINE_CLASS_INVALID)
if (id == (u16)I915_ENGINE_CLASS_INVALID)
return 0xffff0000u;
GEM_BUG_ON(id >= 16);
return 0x10000u << id;
}
static __always_inline u32 __busy_write_id(u8 id)
static __always_inline u32 __busy_write_id(u16 id)
{
/*
* The uABI guarantees an active writer is also amongst the read
@ -29,14 +29,14 @@ static __always_inline u32 __busy_write_id(u8 id)
* last_read - hence we always set both read and write busy for
* last_write.
*/
if (id == (u8)I915_ENGINE_CLASS_INVALID)
if (id == (u16)I915_ENGINE_CLASS_INVALID)
return 0xffffffffu;
return (id + 1) | __busy_read_flag(id);
}
static __always_inline unsigned int
__busy_set_if_active(const struct dma_fence *fence, u32 (*flag)(u8 id))
__busy_set_if_active(const struct dma_fence *fence, u32 (*flag)(u16 id))
{
const struct i915_request *rq;
@ -57,7 +57,7 @@ __busy_set_if_active(const struct dma_fence *fence, u32 (*flag)(u8 id))
return 0;
/* Beware type-expansion follies! */
BUILD_BUG_ON(!typecheck(u8, rq->engine->uabi_class));
BUILD_BUG_ON(!typecheck(u16, rq->engine->uabi_class));
return flag(rq->engine->uabi_class);
}


@ -402,7 +402,7 @@ struct get_pages_work {
static struct sg_table *
__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
struct page **pvec, int num_pages)
struct page **pvec, unsigned long num_pages)
{
unsigned int max_segment = i915_sg_segment_size();
struct sg_table *st;
@ -448,9 +448,10 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
{
struct get_pages_work *work = container_of(_work, typeof(*work), work);
struct drm_i915_gem_object *obj = work->obj;
const int npages = obj->base.size >> PAGE_SHIFT;
const unsigned long npages = obj->base.size >> PAGE_SHIFT;
unsigned long pinned;
struct page **pvec;
int pinned, ret;
int ret;
ret = -ENOMEM;
pinned = 0;
@ -553,7 +554,7 @@ __i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
{
const int num_pages = obj->base.size >> PAGE_SHIFT;
const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
struct mm_struct *mm = obj->userptr.mm->mm;
struct page **pvec;
struct sg_table *pages;


@ -274,8 +274,8 @@ struct intel_engine_cs {
u8 class;
u8 instance;
u8 uabi_class;
u8 uabi_instance;
u16 uabi_class;
u16 uabi_instance;
u32 uabi_capabilities;
u32 context_size;


@ -1177,6 +1177,7 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2));
vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1)));
do {
GEM_BUG_ON(iter->sg->length < I915_GTT_PAGE_SIZE);
vaddr[gen8_pd_index(idx, 0)] = pte_encode | iter->dma;
iter->dma += I915_GTT_PAGE_SIZE;
@ -1660,6 +1661,7 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
vaddr = kmap_atomic_px(i915_pt_entry(pd, act_pt));
do {
GEM_BUG_ON(iter.sg->length < I915_GTT_PAGE_SIZE);
vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma);
iter.dma += I915_GTT_PAGE_SIZE;


@ -78,8 +78,10 @@ static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct
static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct panfrost_file_priv *priv = file->driver_priv;
struct panfrost_gem_object *bo;
struct drm_panfrost_create_bo *args = data;
struct panfrost_gem_mapping *mapping;
if (!args->size || args->pad ||
(args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))
@ -95,7 +97,14 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
if (IS_ERR(bo))
return PTR_ERR(bo);
args->offset = bo->node.start << PAGE_SHIFT;
mapping = panfrost_gem_mapping_get(bo, priv);
if (!mapping) {
drm_gem_object_put_unlocked(&bo->base.base);
return -EINVAL;
}
args->offset = mapping->mmnode.start << PAGE_SHIFT;
panfrost_gem_mapping_put(mapping);
return 0;
}
@ -119,6 +128,11 @@ panfrost_lookup_bos(struct drm_device *dev,
struct drm_panfrost_submit *args,
struct panfrost_job *job)
{
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct panfrost_gem_object *bo;
unsigned int i;
int ret;
job->bo_count = args->bo_handle_count;
if (!job->bo_count)
@ -130,9 +144,32 @@ panfrost_lookup_bos(struct drm_device *dev,
if (!job->implicit_fences)
return -ENOMEM;
return drm_gem_objects_lookup(file_priv,
(void __user *)(uintptr_t)args->bo_handles,
job->bo_count, &job->bos);
ret = drm_gem_objects_lookup(file_priv,
(void __user *)(uintptr_t)args->bo_handles,
job->bo_count, &job->bos);
if (ret)
return ret;
job->mappings = kvmalloc_array(job->bo_count,
sizeof(struct panfrost_gem_mapping *),
GFP_KERNEL | __GFP_ZERO);
if (!job->mappings)
return -ENOMEM;
for (i = 0; i < job->bo_count; i++) {
struct panfrost_gem_mapping *mapping;
bo = to_panfrost_bo(job->bos[i]);
mapping = panfrost_gem_mapping_get(bo, priv);
if (!mapping) {
ret = -EINVAL;
break;
}
job->mappings[i] = mapping;
}
return ret;
}
/**
@ -320,7 +357,9 @@ static int panfrost_ioctl_mmap_bo(struct drm_device *dev, void *data,
static int panfrost_ioctl_get_bo_offset(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct drm_panfrost_get_bo_offset *args = data;
struct panfrost_gem_mapping *mapping;
struct drm_gem_object *gem_obj;
struct panfrost_gem_object *bo;
@ -331,18 +370,26 @@ static int panfrost_ioctl_get_bo_offset(struct drm_device *dev, void *data,
}
bo = to_panfrost_bo(gem_obj);
args->offset = bo->node.start << PAGE_SHIFT;
mapping = panfrost_gem_mapping_get(bo, priv);
drm_gem_object_put_unlocked(gem_obj);
if (!mapping)
return -EINVAL;
args->offset = mapping->mmnode.start << PAGE_SHIFT;
panfrost_gem_mapping_put(mapping);
return 0;
}
static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct drm_panfrost_madvise *args = data;
struct panfrost_device *pfdev = dev->dev_private;
struct drm_gem_object *gem_obj;
struct panfrost_gem_object *bo;
int ret = 0;
gem_obj = drm_gem_object_lookup(file_priv, args->handle);
if (!gem_obj) {
@ -350,22 +397,48 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
return -ENOENT;
}
bo = to_panfrost_bo(gem_obj);
mutex_lock(&pfdev->shrinker_lock);
mutex_lock(&bo->mappings.lock);
if (args->madv == PANFROST_MADV_DONTNEED) {
struct panfrost_gem_mapping *first;
first = list_first_entry(&bo->mappings.list,
struct panfrost_gem_mapping,
node);
/*
* If we want to mark the BO purgeable, there must be only one
* user: the caller FD.
* We could do something smarter and mark the BO purgeable only
* when all its users have marked it purgeable, but globally
* visible/shared BOs are likely to never be marked purgeable
* anyway, so let's not bother.
*/
if (!list_is_singular(&bo->mappings.list) ||
WARN_ON_ONCE(first->mmu != &priv->mmu)) {
ret = -EINVAL;
goto out_unlock_mappings;
}
}
args->retained = drm_gem_shmem_madvise(gem_obj, args->madv);
if (args->retained) {
struct panfrost_gem_object *bo = to_panfrost_bo(gem_obj);
if (args->madv == PANFROST_MADV_DONTNEED)
list_add_tail(&bo->base.madv_list,
&pfdev->shrinker_list);
else if (args->madv == PANFROST_MADV_WILLNEED)
list_del_init(&bo->base.madv_list);
}
out_unlock_mappings:
mutex_unlock(&bo->mappings.lock);
mutex_unlock(&pfdev->shrinker_lock);
drm_gem_object_put_unlocked(gem_obj);
return 0;
return ret;
}
int panfrost_unstable_ioctl_check(void)


@ -29,6 +29,12 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
list_del_init(&bo->base.madv_list);
mutex_unlock(&pfdev->shrinker_lock);
/*
* If we still have mappings attached to the BO, there's a problem in
* our refcounting.
*/
WARN_ON_ONCE(!list_empty(&bo->mappings.list));
if (bo->sgts) {
int i;
int n_sgt = bo->base.base.size / SZ_2M;
@ -46,6 +52,69 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
drm_gem_shmem_free_object(obj);
}
struct panfrost_gem_mapping *
panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
struct panfrost_file_priv *priv)
{
struct panfrost_gem_mapping *iter, *mapping = NULL;
mutex_lock(&bo->mappings.lock);
list_for_each_entry(iter, &bo->mappings.list, node) {
if (iter->mmu == &priv->mmu) {
kref_get(&iter->refcount);
mapping = iter;
break;
}
}
mutex_unlock(&bo->mappings.lock);
return mapping;
}
static void
panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
{
struct panfrost_file_priv *priv;
if (mapping->active)
panfrost_mmu_unmap(mapping);
priv = container_of(mapping->mmu, struct panfrost_file_priv, mmu);
spin_lock(&priv->mm_lock);
if (drm_mm_node_allocated(&mapping->mmnode))
drm_mm_remove_node(&mapping->mmnode);
spin_unlock(&priv->mm_lock);
}
static void panfrost_gem_mapping_release(struct kref *kref)
{
struct panfrost_gem_mapping *mapping;
mapping = container_of(kref, struct panfrost_gem_mapping, refcount);
panfrost_gem_teardown_mapping(mapping);
drm_gem_object_put_unlocked(&mapping->obj->base.base);
kfree(mapping);
}
void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping)
{
if (!mapping)
return;
kref_put(&mapping->refcount, panfrost_gem_mapping_release);
}
void panfrost_gem_teardown_mappings(struct panfrost_gem_object *bo)
{
struct panfrost_gem_mapping *mapping;
mutex_lock(&bo->mappings.lock);
list_for_each_entry(mapping, &bo->mappings.list, node)
panfrost_gem_teardown_mapping(mapping);
mutex_unlock(&bo->mappings.lock);
}
int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
{
int ret;
@ -54,6 +123,16 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
unsigned long color = bo->noexec ? PANFROST_BO_NOEXEC : 0;
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct panfrost_gem_mapping *mapping;
mapping = kzalloc(sizeof(*mapping), GFP_KERNEL);
if (!mapping)
return -ENOMEM;
INIT_LIST_HEAD(&mapping->node);
kref_init(&mapping->refcount);
drm_gem_object_get(obj);
mapping->obj = bo;
/*
* Executable buffers cannot cross a 16MB boundary as the program
@ -66,37 +145,48 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
else
align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
bo->mmu = &priv->mmu;
mapping->mmu = &priv->mmu;
spin_lock(&priv->mm_lock);
ret = drm_mm_insert_node_generic(&priv->mm, &bo->node,
ret = drm_mm_insert_node_generic(&priv->mm, &mapping->mmnode,
size >> PAGE_SHIFT, align, color, 0);
spin_unlock(&priv->mm_lock);
if (ret)
return ret;
goto err;
if (!bo->is_heap) {
ret = panfrost_mmu_map(bo);
if (ret) {
spin_lock(&priv->mm_lock);
drm_mm_remove_node(&bo->node);
spin_unlock(&priv->mm_lock);
}
ret = panfrost_mmu_map(mapping);
if (ret)
goto err;
}
mutex_lock(&bo->mappings.lock);
WARN_ON(bo->base.madv != PANFROST_MADV_WILLNEED);
list_add_tail(&mapping->node, &bo->mappings.list);
mutex_unlock(&bo->mappings.lock);
err:
if (ret)
panfrost_gem_mapping_put(mapping);
return ret;
}
void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
{
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
struct panfrost_gem_mapping *mapping = NULL, *iter;
if (bo->is_mapped)
panfrost_mmu_unmap(bo);
mutex_lock(&bo->mappings.lock);
list_for_each_entry(iter, &bo->mappings.list, node) {
if (iter->mmu == &priv->mmu) {
mapping = iter;
list_del(&iter->node);
break;
}
}
mutex_unlock(&bo->mappings.lock);
spin_lock(&priv->mm_lock);
if (drm_mm_node_allocated(&bo->node))
drm_mm_remove_node(&bo->node);
spin_unlock(&priv->mm_lock);
panfrost_gem_mapping_put(mapping);
}
static int panfrost_gem_pin(struct drm_gem_object *obj)
@ -136,6 +226,8 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
if (!obj)
return NULL;
INIT_LIST_HEAD(&obj->mappings.list);
mutex_init(&obj->mappings.lock);
obj->base.base.funcs = &panfrost_gem_funcs;
return &obj->base.base;


@ -13,23 +13,46 @@ struct panfrost_gem_object {
struct drm_gem_shmem_object base;
struct sg_table *sgts;
struct panfrost_mmu *mmu;
struct drm_mm_node node;
bool is_mapped :1;
/*
* Use a list for now. If searching a mapping ever becomes the
* bottleneck, we should consider using an RB-tree, or even better,
* let the core store drm_gem_object_mapping entries (where we
* could place driver specific data) instead of drm_gem_object ones
* in its drm_file->object_idr table.
*
* struct drm_gem_object_mapping {
* struct drm_gem_object *obj;
* void *driver_priv;
* };
*/
struct {
struct list_head list;
struct mutex lock;
} mappings;
bool noexec :1;
bool is_heap :1;
};
struct panfrost_gem_mapping {
struct list_head node;
struct kref refcount;
struct panfrost_gem_object *obj;
struct drm_mm_node mmnode;
struct panfrost_mmu *mmu;
bool active :1;
};
static inline
struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)
{
return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base);
}
static inline
struct panfrost_gem_object *drm_mm_node_to_panfrost_bo(struct drm_mm_node *node)
static inline struct panfrost_gem_mapping *
drm_mm_node_to_panfrost_mapping(struct drm_mm_node *node)
{
return container_of(node, struct panfrost_gem_object, node);
return container_of(node, struct panfrost_gem_mapping, mmnode);
}
struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);
@ -49,6 +72,12 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void panfrost_gem_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
struct panfrost_gem_mapping *
panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
struct panfrost_file_priv *priv);
void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
void panfrost_gem_teardown_mappings(struct panfrost_gem_object *bo);
void panfrost_gem_shrinker_init(struct drm_device *dev);
void panfrost_gem_shrinker_cleanup(struct drm_device *dev);


@ -39,11 +39,12 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
static bool panfrost_gem_purge(struct drm_gem_object *obj)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
if (!mutex_trylock(&shmem->pages_lock))
return false;
panfrost_mmu_unmap(to_panfrost_bo(obj));
panfrost_gem_teardown_mappings(bo);
drm_gem_shmem_purge_locked(obj);
mutex_unlock(&shmem->pages_lock);


@ -268,9 +268,20 @@ static void panfrost_job_cleanup(struct kref *ref)
dma_fence_put(job->done_fence);
dma_fence_put(job->render_done_fence);
if (job->bos) {
if (job->mappings) {
for (i = 0; i < job->bo_count; i++)
panfrost_gem_mapping_put(job->mappings[i]);
kvfree(job->mappings);
}
if (job->bos) {
struct panfrost_gem_object *bo;
for (i = 0; i < job->bo_count; i++) {
bo = to_panfrost_bo(job->bos[i]);
drm_gem_object_put_unlocked(job->bos[i]);
}
kvfree(job->bos);
}


@ -32,6 +32,7 @@ struct panfrost_job {
/* Exclusive fences we have taken from the BOs to wait for */
struct dma_fence **implicit_fences;
struct panfrost_gem_mapping **mappings;
struct drm_gem_object **bos;
u32 bo_count;


@ -269,14 +269,15 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
return 0;
}
int panfrost_mmu_map(struct panfrost_gem_object *bo)
int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
{
struct panfrost_gem_object *bo = mapping->obj;
struct drm_gem_object *obj = &bo->base.base;
struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
struct sg_table *sgt;
int prot = IOMMU_READ | IOMMU_WRITE;
if (WARN_ON(bo->is_mapped))
if (WARN_ON(mapping->active))
return 0;
if (bo->noexec)
@ -286,25 +287,28 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
if (WARN_ON(IS_ERR(sgt)))
return PTR_ERR(sgt);
mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT, prot, sgt);
bo->is_mapped = true;
mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT,
prot, sgt);
mapping->active = true;
return 0;
}
void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
{
struct panfrost_gem_object *bo = mapping->obj;
struct drm_gem_object *obj = &bo->base.base;
struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
struct io_pgtable_ops *ops = bo->mmu->pgtbl_ops;
u64 iova = bo->node.start << PAGE_SHIFT;
size_t len = bo->node.size << PAGE_SHIFT;
struct io_pgtable_ops *ops = mapping->mmu->pgtbl_ops;
u64 iova = mapping->mmnode.start << PAGE_SHIFT;
size_t len = mapping->mmnode.size << PAGE_SHIFT;
size_t unmapped_len = 0;
if (WARN_ON(!bo->is_mapped))
if (WARN_ON(!mapping->active))
return;
dev_dbg(pfdev->dev, "unmap: as=%d, iova=%llx, len=%zx", bo->mmu->as, iova, len);
dev_dbg(pfdev->dev, "unmap: as=%d, iova=%llx, len=%zx",
mapping->mmu->as, iova, len);
while (unmapped_len < len) {
size_t unmapped_page;
@ -318,8 +322,9 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
unmapped_len += pgsize;
}
panfrost_mmu_flush_range(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT, len);
bo->is_mapped = false;
panfrost_mmu_flush_range(pfdev, mapping->mmu,
mapping->mmnode.start << PAGE_SHIFT, len);
mapping->active = false;
}
static void mmu_tlb_inv_context_s1(void *cookie)
@ -394,10 +399,10 @@ void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv)
free_io_pgtable_ops(mmu->pgtbl_ops);
}
static struct panfrost_gem_object *
addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
static struct panfrost_gem_mapping *
addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
{
struct panfrost_gem_object *bo = NULL;
struct panfrost_gem_mapping *mapping = NULL;
struct panfrost_file_priv *priv;
struct drm_mm_node *node;
u64 offset = addr >> PAGE_SHIFT;
@ -418,8 +423,9 @@ addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
drm_mm_for_each_node(node, &priv->mm) {
if (offset >= node->start &&
offset < (node->start + node->size)) {
bo = drm_mm_node_to_panfrost_bo(node);
drm_gem_object_get(&bo->base.base);
mapping = drm_mm_node_to_panfrost_mapping(node);
kref_get(&mapping->refcount);
break;
}
}
@ -427,7 +433,7 @@ addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
spin_unlock(&priv->mm_lock);
out:
spin_unlock(&pfdev->as_lock);
return bo;
return mapping;
}
#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE)
@ -436,28 +442,30 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
u64 addr)
{
int ret, i;
struct panfrost_gem_mapping *bomapping;
struct panfrost_gem_object *bo;
struct address_space *mapping;
pgoff_t page_offset;
struct sg_table *sgt;
struct page **pages;
bo = addr_to_drm_mm_node(pfdev, as, addr);
if (!bo)
bomapping = addr_to_mapping(pfdev, as, addr);
if (!bomapping)
return -ENOENT;
bo = bomapping->obj;
if (!bo->is_heap) {
dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)",
bo->node.start << PAGE_SHIFT);
bomapping->mmnode.start << PAGE_SHIFT);
ret = -EINVAL;
goto err_bo;
}
WARN_ON(bo->mmu->as != as);
WARN_ON(bomapping->mmu->as != as);
/* Assume 2MB alignment and size multiple */
addr &= ~((u64)SZ_2M - 1);
page_offset = addr >> PAGE_SHIFT;
page_offset -= bo->node.start;
page_offset -= bomapping->mmnode.start;
mutex_lock(&bo->base.pages_lock);
@ -509,13 +517,14 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
goto err_map;
}
mmu_map_sg(pfdev, bo->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
mmu_map_sg(pfdev, bomapping->mmu, addr,
IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
bo->is_mapped = true;
bomapping->active = true;
dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
drm_gem_object_put_unlocked(&bo->base.base);
panfrost_gem_mapping_put(bomapping);
return 0;


@ -4,12 +4,12 @@
#ifndef __PANFROST_MMU_H__
#define __PANFROST_MMU_H__
struct panfrost_gem_object;
struct panfrost_gem_mapping;
struct panfrost_file_priv;
struct panfrost_mmu;
int panfrost_mmu_map(struct panfrost_gem_object *bo);
void panfrost_mmu_unmap(struct panfrost_gem_object *bo);
int panfrost_mmu_map(struct panfrost_gem_mapping *mapping);
void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping);
int panfrost_mmu_init(struct panfrost_device *pfdev);
void panfrost_mmu_fini(struct panfrost_device *pfdev);


@ -25,7 +25,7 @@
#define V4_SHADERS_PER_COREGROUP 4
struct panfrost_perfcnt {
struct panfrost_gem_object *bo;
struct panfrost_gem_mapping *mapping;
size_t bosize;
void *buf;
struct panfrost_file_priv *user;
@ -49,7 +49,7 @@ static int panfrost_perfcnt_dump_locked(struct panfrost_device *pfdev)
int ret;
reinit_completion(&pfdev->perfcnt->dump_comp);
gpuva = pfdev->perfcnt->bo->node.start << PAGE_SHIFT;
gpuva = pfdev->perfcnt->mapping->mmnode.start << PAGE_SHIFT;
gpu_write(pfdev, GPU_PERFCNT_BASE_LO, gpuva);
gpu_write(pfdev, GPU_PERFCNT_BASE_HI, gpuva >> 32);
gpu_write(pfdev, GPU_INT_CLEAR,
@ -89,17 +89,22 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
if (IS_ERR(bo))
return PTR_ERR(bo);
perfcnt->bo = to_panfrost_bo(&bo->base);
/* Map the perfcnt buf in the address space attached to file_priv. */
ret = panfrost_gem_open(&perfcnt->bo->base.base, file_priv);
ret = panfrost_gem_open(&bo->base, file_priv);
if (ret)
goto err_put_bo;
perfcnt->mapping = panfrost_gem_mapping_get(to_panfrost_bo(&bo->base),
user);
if (!perfcnt->mapping) {
ret = -EINVAL;
goto err_close_bo;
}
perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
if (IS_ERR(perfcnt->buf)) {
ret = PTR_ERR(perfcnt->buf);
goto err_close_bo;
goto err_put_mapping;
}
/*
@ -154,12 +159,17 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
if (panfrost_has_hw_issue(pfdev, HW_ISSUE_8186))
gpu_write(pfdev, GPU_PRFCNT_TILER_EN, 0xffffffff);
/* The BO ref is retained by the mapping. */
drm_gem_object_put_unlocked(&bo->base);
return 0;
err_vunmap:
drm_gem_shmem_vunmap(&perfcnt->bo->base.base, perfcnt->buf);
drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
panfrost_gem_close(&perfcnt->bo->base.base, file_priv);
panfrost_gem_close(&bo->base, file_priv);
err_put_bo:
drm_gem_object_put_unlocked(&bo->base);
return ret;
@ -182,11 +192,11 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
drm_gem_shmem_vunmap(&perfcnt->bo->base.base, perfcnt->buf);
drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->bo->base.base, file_priv);
drm_gem_object_put_unlocked(&perfcnt->bo->base.base);
perfcnt->bo = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_gem_mapping_put(perfcnt->mapping);
perfcnt->mapping = NULL;
pm_runtime_mark_last_busy(pfdev->dev);
pm_runtime_put_autosuspend(pfdev->dev);


@ -294,9 +294,10 @@ static inline u16 volt2reg(int channel, long volt, u8 bypass_attn)
long reg;
if (bypass_attn & (1 << channel))
reg = (volt * 1024) / 2250;
reg = DIV_ROUND_CLOSEST(volt * 1024, 2250);
else
reg = (volt * r[1] * 1024) / ((r[0] + r[1]) * 2250);
reg = DIV_ROUND_CLOSEST(volt * r[1] * 1024,
(r[0] + r[1]) * 2250);
return clamp_val(reg, 0, 1023) & (0xff << 2);
}


@ -51,6 +51,7 @@ struct hwmon_device_attribute {
#define to_hwmon_attr(d) \
container_of(d, struct hwmon_device_attribute, dev_attr)
#define to_dev_attr(a) container_of(a, struct device_attribute, attr)
/*
* Thermal zone information
@ -58,7 +59,7 @@ struct hwmon_device_attribute {
* also provides the sensor index.
*/
struct hwmon_thermal_data {
struct hwmon_device *hwdev; /* Reference to hwmon device */
struct device *dev; /* Reference to hwmon device */
int index; /* sensor index */
};
@ -95,9 +96,27 @@ static const struct attribute_group *hwmon_dev_attr_groups[] = {
NULL
};
static void hwmon_free_attrs(struct attribute **attrs)
{
int i;
for (i = 0; attrs[i]; i++) {
struct device_attribute *dattr = to_dev_attr(attrs[i]);
struct hwmon_device_attribute *hattr = to_hwmon_attr(dattr);
kfree(hattr);
}
kfree(attrs);
}
static void hwmon_dev_release(struct device *dev)
{
kfree(to_hwmon_device(dev));
struct hwmon_device *hwdev = to_hwmon_device(dev);
if (hwdev->group.attrs)
hwmon_free_attrs(hwdev->group.attrs);
kfree(hwdev->groups);
kfree(hwdev);
}
static struct class hwmon_class = {
@ -119,11 +138,11 @@ static DEFINE_IDA(hwmon_ida);
static int hwmon_thermal_get_temp(void *data, int *temp)
{
struct hwmon_thermal_data *tdata = data;
struct hwmon_device *hwdev = tdata->hwdev;
struct hwmon_device *hwdev = to_hwmon_device(tdata->dev);
int ret;
long t;
ret = hwdev->chip->ops->read(&hwdev->dev, hwmon_temp, hwmon_temp_input,
ret = hwdev->chip->ops->read(tdata->dev, hwmon_temp, hwmon_temp_input,
tdata->index, &t);
if (ret < 0)
return ret;
@ -137,8 +156,7 @@ static const struct thermal_zone_of_device_ops hwmon_thermal_ops = {
.get_temp = hwmon_thermal_get_temp,
};
static int hwmon_thermal_add_sensor(struct device *dev,
struct hwmon_device *hwdev, int index)
static int hwmon_thermal_add_sensor(struct device *dev, int index)
{
struct hwmon_thermal_data *tdata;
struct thermal_zone_device *tzd;
@ -147,10 +165,10 @@ static int hwmon_thermal_add_sensor(struct device *dev,
if (!tdata)
return -ENOMEM;
tdata->hwdev = hwdev;
tdata->dev = dev;
tdata->index = index;
tzd = devm_thermal_zone_of_sensor_register(&hwdev->dev, index, tdata,
tzd = devm_thermal_zone_of_sensor_register(dev, index, tdata,
&hwmon_thermal_ops);
/*
* If CONFIG_THERMAL_OF is disabled, this returns -ENODEV,
@ -162,8 +180,7 @@ static int hwmon_thermal_add_sensor(struct device *dev,
return 0;
}
#else
static int hwmon_thermal_add_sensor(struct device *dev,
struct hwmon_device *hwdev, int index)
static int hwmon_thermal_add_sensor(struct device *dev, int index)
{
return 0;
}
@ -250,8 +267,7 @@ static bool is_string_attr(enum hwmon_sensor_types type, u32 attr)
(type == hwmon_fan && attr == hwmon_fan_label);
}
static struct attribute *hwmon_genattr(struct device *dev,
const void *drvdata,
static struct attribute *hwmon_genattr(const void *drvdata,
enum hwmon_sensor_types type,
u32 attr,
int index,
@ -279,7 +295,7 @@ static struct attribute *hwmon_genattr(struct device *dev,
if ((mode & 0222) && !ops->write)
return ERR_PTR(-EINVAL);
hattr = devm_kzalloc(dev, sizeof(*hattr), GFP_KERNEL);
hattr = kzalloc(sizeof(*hattr), GFP_KERNEL);
if (!hattr)
return ERR_PTR(-ENOMEM);
@ -492,8 +508,7 @@ static int hwmon_num_channel_attrs(const struct hwmon_channel_info *info)
return n;
}
static int hwmon_genattrs(struct device *dev,
const void *drvdata,
static int hwmon_genattrs(const void *drvdata,
struct attribute **attrs,
const struct hwmon_ops *ops,
const struct hwmon_channel_info *info)
@ -519,7 +534,7 @@ static int hwmon_genattrs(struct device *dev,
attr_mask &= ~BIT(attr);
if (attr >= template_size)
return -EINVAL;
a = hwmon_genattr(dev, drvdata, info->type, attr, i,
a = hwmon_genattr(drvdata, info->type, attr, i,
templates[attr], ops);
if (IS_ERR(a)) {
if (PTR_ERR(a) != -ENOENT)
@ -533,8 +548,7 @@ static int hwmon_genattrs(struct device *dev,
}
static struct attribute **
__hwmon_create_attrs(struct device *dev, const void *drvdata,
const struct hwmon_chip_info *chip)
__hwmon_create_attrs(const void *drvdata, const struct hwmon_chip_info *chip)
{
int ret, i, aindex = 0, nattrs = 0;
struct attribute **attrs;
@ -545,15 +559,17 @@ __hwmon_create_attrs(struct device *dev, const void *drvdata,
if (nattrs == 0)
return ERR_PTR(-EINVAL);
attrs = devm_kcalloc(dev, nattrs + 1, sizeof(*attrs), GFP_KERNEL);
attrs = kcalloc(nattrs + 1, sizeof(*attrs), GFP_KERNEL);
if (!attrs)
return ERR_PTR(-ENOMEM);
for (i = 0; chip->info[i]; i++) {
ret = hwmon_genattrs(dev, drvdata, &attrs[aindex], chip->ops,
ret = hwmon_genattrs(drvdata, &attrs[aindex], chip->ops,
chip->info[i]);
if (ret < 0)
if (ret < 0) {
hwmon_free_attrs(attrs);
return ERR_PTR(ret);
}
aindex += ret;
}
@ -595,14 +611,13 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
for (i = 0; groups[i]; i++)
ngroups++;
hwdev->groups = devm_kcalloc(dev, ngroups, sizeof(*groups),
GFP_KERNEL);
hwdev->groups = kcalloc(ngroups, sizeof(*groups), GFP_KERNEL);
if (!hwdev->groups) {
err = -ENOMEM;
goto free_hwmon;
}
attrs = __hwmon_create_attrs(dev, drvdata, chip);
attrs = __hwmon_create_attrs(drvdata, chip);
if (IS_ERR(attrs)) {
err = PTR_ERR(attrs);
goto free_hwmon;
@ -647,8 +662,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
hwmon_temp_input, j))
continue;
if (info[i]->config[j] & HWMON_T_INPUT) {
err = hwmon_thermal_add_sensor(dev,
hwdev, j);
err = hwmon_thermal_add_sensor(hdev, j);
if (err) {
device_unregister(hdev);
/*
@ -667,7 +681,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
return hdev;
free_hwmon:
kfree(hwdev);
hwmon_dev_release(hdev);
ida_remove:
ida_simple_remove(&hwmon_ida, id);
return ERR_PTR(err);


@ -23,8 +23,8 @@
static const u8 REG_VOLTAGE[5] = { 0x09, 0x0a, 0x0c, 0x0d, 0x0e };
static const u8 REG_VOLTAGE_LIMIT_LSB[2][5] = {
{ 0x40, 0x00, 0x42, 0x44, 0x46 },
{ 0x3f, 0x00, 0x41, 0x43, 0x45 },
{ 0x46, 0x00, 0x40, 0x42, 0x44 },
{ 0x45, 0x00, 0x3f, 0x41, 0x43 },
};
static const u8 REG_VOLTAGE_LIMIT_MSB[5] = { 0x48, 0x00, 0x47, 0x47, 0x48 };
@ -58,6 +58,8 @@ static const u8 REG_VOLTAGE_LIMIT_MSB_SHIFT[2][5] = {
struct nct7802_data {
struct regmap *regmap;
struct mutex access_lock; /* for multi-byte read and write operations */
u8 in_status;
struct mutex in_alarm_lock;
};
static ssize_t temp_type_show(struct device *dev,
@ -368,6 +370,66 @@ static ssize_t in_store(struct device *dev, struct device_attribute *attr,
return err ? : count;
}
static ssize_t in_alarm_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
struct nct7802_data *data = dev_get_drvdata(dev);
int volt, min, max, ret;
unsigned int val;
mutex_lock(&data->in_alarm_lock);
/*
* The SMI Voltage status register is the only register giving a status
* for voltages. A bit is set for each input crossing a threshold, in
* both direction, but the "inside" or "outside" limits info is not
* available. Also this register is cleared on read.
* Note: this is not explicitly spelled out in the datasheet, but
* from experiment.
* To deal with this we use a status cache with one validity bit and
* one status bit for each input. Validity is cleared at startup and
* each time the register reports a change, and the status is processed
* by software based on current input value and limits.
*/
ret = regmap_read(data->regmap, 0x1e, &val); /* SMI Voltage status */
if (ret < 0)
goto abort;
/* invalidate cached status for all inputs crossing a threshold */
data->in_status &= ~((val & 0x0f) << 4);
/* if cached status for requested input is invalid, update it */
if (!(data->in_status & (0x10 << sattr->index))) {
ret = nct7802_read_voltage(data, sattr->nr, 0);
if (ret < 0)
goto abort;
volt = ret;
ret = nct7802_read_voltage(data, sattr->nr, 1);
if (ret < 0)
goto abort;
min = ret;
ret = nct7802_read_voltage(data, sattr->nr, 2);
if (ret < 0)
goto abort;
max = ret;
if (volt < min || volt > max)
data->in_status |= (1 << sattr->index);
else
data->in_status &= ~(1 << sattr->index);
data->in_status |= 0x10 << sattr->index;
}
ret = sprintf(buf, "%u\n", !!(data->in_status & (1 << sattr->index)));
abort:
mutex_unlock(&data->in_alarm_lock);
return ret;
}
static ssize_t temp_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@ -660,7 +722,7 @@ static const struct attribute_group nct7802_temp_group = {
static SENSOR_DEVICE_ATTR_2_RO(in0_input, in, 0, 0);
static SENSOR_DEVICE_ATTR_2_RW(in0_min, in, 0, 1);
static SENSOR_DEVICE_ATTR_2_RW(in0_max, in, 0, 2);
static SENSOR_DEVICE_ATTR_2_RO(in0_alarm, alarm, 0x1e, 3);
static SENSOR_DEVICE_ATTR_2_RO(in0_alarm, in_alarm, 0, 3);
static SENSOR_DEVICE_ATTR_2_RW(in0_beep, beep, 0x5a, 3);
static SENSOR_DEVICE_ATTR_2_RO(in1_input, in, 1, 0);
@ -668,19 +730,19 @@ static SENSOR_DEVICE_ATTR_2_RO(in1_input, in, 1, 0);
static SENSOR_DEVICE_ATTR_2_RO(in2_input, in, 2, 0);
static SENSOR_DEVICE_ATTR_2_RW(in2_min, in, 2, 1);
static SENSOR_DEVICE_ATTR_2_RW(in2_max, in, 2, 2);
static SENSOR_DEVICE_ATTR_2_RO(in2_alarm, alarm, 0x1e, 0);
static SENSOR_DEVICE_ATTR_2_RO(in2_alarm, in_alarm, 2, 0);
static SENSOR_DEVICE_ATTR_2_RW(in2_beep, beep, 0x5a, 0);
static SENSOR_DEVICE_ATTR_2_RO(in3_input, in, 3, 0);
static SENSOR_DEVICE_ATTR_2_RW(in3_min, in, 3, 1);
static SENSOR_DEVICE_ATTR_2_RW(in3_max, in, 3, 2);
static SENSOR_DEVICE_ATTR_2_RO(in3_alarm, alarm, 0x1e, 1);
static SENSOR_DEVICE_ATTR_2_RO(in3_alarm, in_alarm, 3, 1);
static SENSOR_DEVICE_ATTR_2_RW(in3_beep, beep, 0x5a, 1);
static SENSOR_DEVICE_ATTR_2_RO(in4_input, in, 4, 0);
static SENSOR_DEVICE_ATTR_2_RW(in4_min, in, 4, 1);
static SENSOR_DEVICE_ATTR_2_RW(in4_max, in, 4, 2);
static SENSOR_DEVICE_ATTR_2_RO(in4_alarm, alarm, 0x1e, 2);
static SENSOR_DEVICE_ATTR_2_RO(in4_alarm, in_alarm, 4, 2);
static SENSOR_DEVICE_ATTR_2_RW(in4_beep, beep, 0x5a, 2);
static struct attribute *nct7802_in_attrs[] = {
@ -1011,6 +1073,7 @@ static int nct7802_probe(struct i2c_client *client,
return PTR_ERR(data->regmap);
mutex_init(&data->access_lock);
mutex_init(&data->in_alarm_lock);
ret = nct7802_init_chip(data);
if (ret < 0)


@ -484,10 +484,7 @@ static int evdev_open(struct inode *inode, struct file *file)
struct evdev_client *client;
int error;
client = kzalloc(struct_size(client, buffer, bufsize),
GFP_KERNEL | __GFP_NOWARN);
if (!client)
client = vzalloc(struct_size(client, buffer, bufsize));
client = kvzalloc(struct_size(client, buffer, bufsize), GFP_KERNEL);
if (!client)
return -ENOMEM;


@ -336,7 +336,8 @@ static int keyspan_setup(struct usb_device* dev)
int retval = 0;
retval = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
0x11, 0x40, 0x5601, 0x0, NULL, 0, 0);
0x11, 0x40, 0x5601, 0x0, NULL, 0,
USB_CTRL_SET_TIMEOUT);
if (retval) {
dev_dbg(&dev->dev, "%s - failed to set bit rate due to error: %d\n",
__func__, retval);
@ -344,7 +345,8 @@ static int keyspan_setup(struct usb_device* dev)
}
retval = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
0x44, 0x40, 0x0, 0x0, NULL, 0, 0);
0x44, 0x40, 0x0, 0x0, NULL, 0,
USB_CTRL_SET_TIMEOUT);
if (retval) {
dev_dbg(&dev->dev, "%s - failed to set resume sensitivity due to error: %d\n",
__func__, retval);
@ -352,7 +354,8 @@ static int keyspan_setup(struct usb_device* dev)
}
retval = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
0x22, 0x40, 0x0, 0x0, NULL, 0, 0);
0x22, 0x40, 0x0, 0x0, NULL, 0,
USB_CTRL_SET_TIMEOUT);
if (retval) {
dev_dbg(&dev->dev, "%s - failed to turn receive on due to error: %d\n",
__func__, retval);


@ -108,9 +108,16 @@ static int max77650_onkey_probe(struct platform_device *pdev)
return input_register_device(onkey->input);
}
static const struct of_device_id max77650_onkey_of_match[] = {
{ .compatible = "maxim,max77650-onkey" },
{ }
};
MODULE_DEVICE_TABLE(of, max77650_onkey_of_match);
static struct platform_driver max77650_onkey_driver = {
.driver = {
.name = "max77650-onkey",
.of_match_table = max77650_onkey_of_match,
},
.probe = max77650_onkey_probe,
};


@ -90,7 +90,7 @@ static int pm8xxx_vib_set(struct pm8xxx_vib *vib, bool on)
if (regs->enable_mask)
rc = regmap_update_bits(vib->regmap, regs->enable_addr,
on ? regs->enable_mask : 0, val);
regs->enable_mask, on ? ~0 : 0);
return rc;
}


@ -24,6 +24,12 @@
#define F54_NUM_TX_OFFSET 1
#define F54_NUM_RX_OFFSET 0
/*
 * The SMBus protocol can read at most 32 bytes in one transfer,
 * which is also fine for I2C/SPI.
*/
#define F54_REPORT_DATA_SIZE 32
/* F54 commands */
#define F54_GET_REPORT 1
#define F54_FORCE_CAL 2
@ -526,6 +532,7 @@ static void rmi_f54_work(struct work_struct *work)
int report_size;
u8 command;
int error;
int i;
report_size = rmi_f54_get_report_size(f54);
if (report_size == 0) {
@ -558,23 +565,27 @@ static void rmi_f54_work(struct work_struct *work)
rmi_dbg(RMI_DEBUG_FN, &fn->dev, "Get report command completed, reading data\n");
fifo[0] = 0;
fifo[1] = 0;
error = rmi_write_block(fn->rmi_dev,
fn->fd.data_base_addr + F54_FIFO_OFFSET,
fifo, sizeof(fifo));
if (error) {
dev_err(&fn->dev, "Failed to set fifo start offset\n");
goto abort;
}
for (i = 0; i < report_size; i += F54_REPORT_DATA_SIZE) {
int size = min(F54_REPORT_DATA_SIZE, report_size - i);
error = rmi_read_block(fn->rmi_dev, fn->fd.data_base_addr +
F54_REPORT_DATA_OFFSET, f54->report_data,
report_size);
if (error) {
dev_err(&fn->dev, "%s: read [%d bytes] returned %d\n",
__func__, report_size, error);
goto abort;
fifo[0] = i & 0xff;
fifo[1] = i >> 8;
error = rmi_write_block(fn->rmi_dev,
fn->fd.data_base_addr + F54_FIFO_OFFSET,
fifo, sizeof(fifo));
if (error) {
dev_err(&fn->dev, "Failed to set fifo start offset\n");
goto abort;
}
error = rmi_read_block(fn->rmi_dev, fn->fd.data_base_addr +
F54_REPORT_DATA_OFFSET,
f54->report_data + i, size);
if (error) {
dev_err(&fn->dev, "%s: read [%d bytes] returned %d\n",
__func__, size, error);
goto abort;
}
}
abort:
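The hunk above replaces one bulk read with a chunked loop: rewrite the 16-bit FIFO offset (low byte first), then fetch at most 32 bytes, and repeat. A minimal stand-alone sketch of that pattern, with memcpy() standing in for the bus transfer and all names hypothetical:

#include <stdio.h>
#include <string.h>

#define CHUNK 32    /* hypothetical SMBus-style per-transfer limit */

/* Stand-alone model of the loop above: before each partial read the
 * 16-bit fifo offset is rewritten (low byte first), then at most
 * CHUNK bytes are fetched starting at that offset. */
static void chunked_read(const unsigned char *src, unsigned char *dst,
                         int total)
{
    int i;

    for (i = 0; i < total; i += CHUNK) {
        int size = total - i < CHUNK ? total - i : CHUNK;
        unsigned char fifo[2] = { i & 0xff, i >> 8 };

        /* a real driver would write fifo[] to the device here */
        (void)fifo;
        memcpy(dst + i, src + i, size);
    }
}

int main(void)
{
    unsigned char src[100], dst[100];
    int i;

    for (i = 0; i < 100; i++)
        src[i] = i;
    chunked_read(src, dst, sizeof(src));
    printf("%s\n", memcmp(src, dst, sizeof(src)) ? "FAIL" : "OK");
    return 0;
}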


@ -163,6 +163,7 @@ static int rmi_smb_write_block(struct rmi_transport_dev *xport, u16 rmiaddr,
/* prepare to write next block of bytes */
cur_len -= SMB_MAX_COUNT;
databuff += SMB_MAX_COUNT;
rmiaddr += SMB_MAX_COUNT;
}
exit:
mutex_unlock(&rmi_smb->page_mutex);
@ -214,6 +215,7 @@ static int rmi_smb_read_block(struct rmi_transport_dev *xport, u16 rmiaddr,
/* prepare to read next block of bytes */
cur_len -= SMB_MAX_COUNT;
databuff += SMB_MAX_COUNT;
rmiaddr += SMB_MAX_COUNT;
}
retval = 0;


@ -1713,7 +1713,7 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
aiptek->inputdev = inputdev;
aiptek->intf = intf;
aiptek->ifnum = intf->altsetting[0].desc.bInterfaceNumber;
aiptek->ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
aiptek->inDelay = 0;
aiptek->endDelay = 0;
aiptek->previousJitterable = 0;
@ -1802,14 +1802,14 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
input_set_abs_params(inputdev, ABS_WHEEL, AIPTEK_WHEEL_MIN, AIPTEK_WHEEL_MAX - 1, 0, 0);
/* Verify that a device really has an endpoint */
if (intf->altsetting[0].desc.bNumEndpoints < 1) {
if (intf->cur_altsetting->desc.bNumEndpoints < 1) {
dev_err(&intf->dev,
"interface has %d endpoints, but must have minimum 1\n",
intf->altsetting[0].desc.bNumEndpoints);
intf->cur_altsetting->desc.bNumEndpoints);
err = -EINVAL;
goto fail3;
}
endpoint = &intf->altsetting[0].endpoint[0].desc;
endpoint = &intf->cur_altsetting->endpoint[0].desc;
/* Go set up our URB, which is called when the tablet receives
* input.


@ -875,18 +875,14 @@ static int gtco_probe(struct usb_interface *usbinterface,
}
/* Sanity check that a device has an endpoint */
if (usbinterface->altsetting[0].desc.bNumEndpoints < 1) {
if (usbinterface->cur_altsetting->desc.bNumEndpoints < 1) {
dev_err(&usbinterface->dev,
"Invalid number of endpoints\n");
error = -EINVAL;
goto err_free_urb;
}
/*
* The endpoint is always altsetting 0, we know this since we know
* this device only has one interrupt endpoint
*/
endpoint = &usbinterface->altsetting[0].endpoint[0].desc;
endpoint = &usbinterface->cur_altsetting->endpoint[0].desc;
/* Some debug */
dev_dbg(&usbinterface->dev, "gtco # interfaces: %d\n", usbinterface->num_altsetting);
@ -896,7 +892,8 @@ static int gtco_probe(struct usb_interface *usbinterface,
if (usb_endpoint_xfer_int(endpoint))
dev_dbg(&usbinterface->dev, "endpoint: we have interrupt endpoint\n");
dev_dbg(&usbinterface->dev, "endpoint extra len:%d\n", usbinterface->altsetting[0].extralen);
dev_dbg(&usbinterface->dev, "interface extra len:%d\n",
usbinterface->cur_altsetting->extralen);
/*
* Find the HID descriptor so we can find out the size of the
@ -973,8 +970,6 @@ static int gtco_probe(struct usb_interface *usbinterface,
input_dev->dev.parent = &usbinterface->dev;
/* Setup the URB, it will be posted later on open of input device */
endpoint = &usbinterface->altsetting[0].endpoint[0].desc;
usb_fill_int_urb(gtco->urbinfo,
udev,
usb_rcvintpipe(udev,


@ -275,7 +275,7 @@ static int pegasus_probe(struct usb_interface *intf,
return -ENODEV;
/* Sanity check that the device has an endpoint */
if (intf->altsetting[0].desc.bNumEndpoints < 1) {
if (intf->cur_altsetting->desc.bNumEndpoints < 1) {
dev_err(&intf->dev, "Invalid number of endpoints\n");
return -EINVAL;
}


@ -237,6 +237,7 @@ static int sun4i_ts_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct device *hwmon;
struct thermal_zone_device *thermal;
int error;
u32 reg;
bool ts_attached;
@ -355,7 +356,10 @@ static int sun4i_ts_probe(struct platform_device *pdev)
if (IS_ERR(hwmon))
return PTR_ERR(hwmon);
devm_thermal_zone_of_sensor_register(ts->dev, 0, ts, &sun4i_ts_tz_ops);
thermal = devm_thermal_zone_of_sensor_register(ts->dev, 0, ts,
&sun4i_ts_tz_ops);
if (IS_ERR(thermal))
return PTR_ERR(thermal);
writel(TEMP_IRQ_EN(1), ts->base + TP_INT_FIFOC);


@ -661,7 +661,7 @@ static int sur40_probe(struct usb_interface *interface,
int error;
/* Check if we really have the right interface. */
iface_desc = &interface->altsetting[0];
iface_desc = interface->cur_altsetting;
if (iface_desc->desc.bInterfaceClass != 0xFF)
return -ENODEV;


@ -1655,27 +1655,39 @@ static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
static void init_iommu_perf_ctr(struct amd_iommu *iommu)
{
struct pci_dev *pdev = iommu->dev;
u64 val = 0xabcd, val2 = 0;
u64 val = 0xabcd, val2 = 0, save_reg = 0;
if (!iommu_feature(iommu, FEATURE_PC))
return;
amd_iommu_pc_present = true;
/* save the value to restore, if writable */
if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
goto pc_false;
/* Check if the performance counters can be written to */
if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
(iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
(val != val2)) {
pci_err(pdev, "Unable to write to IOMMU perf counter.\n");
amd_iommu_pc_present = false;
return;
}
(val != val2))
goto pc_false;
/* restore */
if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
goto pc_false;
pci_info(pdev, "IOMMU performance counters supported\n");
val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET);
iommu->max_banks = (u8) ((val >> 12) & 0x3f);
iommu->max_counters = (u8) ((val >> 7) & 0xf);
return;
pc_false:
pci_err(pdev, "Unable to read/write to IOMMU perf counter.\n");
amd_iommu_pc_present = false;
return;
}
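The reworked probe now saves the counter register before the write test and restores it afterwards, so probing leaves the hardware state untouched. A stand-alone C sketch of that save/write/verify/restore pattern against a simulated register (all names hypothetical):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of the probe above: save the register, write a
 * test pattern, read it back, then restore the saved value so the
 * probe leaves no trace. A read-only register fails the check. */
static uint64_t fake_reg;
static bool writable = true;

static void reg_write(uint64_t v) { if (writable) fake_reg = v; }
static uint64_t reg_read(void) { return fake_reg; }

static bool counter_is_writable(void)
{
    uint64_t save = reg_read();
    bool ok;

    reg_write(0xabcd);
    ok = reg_read() == 0xabcd;
    reg_write(save);            /* restore the original value */
    return ok;
}

int main(void)
{
    printf("writable: %d\n", counter_is_writable());
    writable = false;
    fake_reg = 0;
    printf("writable: %d\n", counter_is_writable());
    return 0;
}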
static ssize_t amd_iommu_show_cap(struct device *dev,


@ -5163,7 +5163,8 @@ static void dmar_remove_one_dev_info(struct device *dev)
spin_lock_irqsave(&device_domain_lock, flags);
info = dev->archdata.iommu;
if (info)
if (info && info != DEFER_DEVICE_DOMAIN_INFO
&& info != DUMMY_DEVICE_DOMAIN_INFO)
__dmar_remove_one_dev_info(info);
spin_unlock_irqrestore(&device_domain_lock, flags);
}


@ -493,16 +493,17 @@ static int as3645a_parse_node(struct as3645a *flash,
switch (id) {
case AS_LED_FLASH:
flash->flash_node = child;
fwnode_handle_get(child);
break;
case AS_LED_INDICATOR:
flash->indicator_node = child;
fwnode_handle_get(child);
break;
default:
dev_warn(&flash->client->dev,
"unknown LED %u encountered, ignoring\n", id);
break;
}
fwnode_handle_get(child);
}
if (!flash->flash_node) {


@ -151,9 +151,14 @@ static struct gpio_leds_priv *gpio_leds_create(struct platform_device *pdev)
struct gpio_led led = {};
const char *state = NULL;
/*
 * Acquire gpiod from DT with an uninitialized label, which
 * will be updated after the LED class device is registered.
 * Only then is the final LED name known.
*/
led.gpiod = devm_fwnode_get_gpiod_from_child(dev, NULL, child,
GPIOD_ASIS,
led.name);
NULL);
if (IS_ERR(led.gpiod)) {
fwnode_handle_put(child);
return ERR_CAST(led.gpiod);
@ -186,6 +191,9 @@ static struct gpio_leds_priv *gpio_leds_create(struct platform_device *pdev)
fwnode_handle_put(child);
return ERR_PTR(ret);
}
/* Set gpiod label to match the corresponding LED name. */
gpiod_set_consumer_name(led_dat->gpiod,
led_dat->cdev.dev->kobj.name);
priv->num_leds++;
}


@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
// TI LM3532 LED driver
// Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
// http://www.ti.com/lit/ds/symlink/lm3532.pdf
#include <linux/i2c.h>
#include <linux/leds.h>
@ -623,7 +624,7 @@ static int lm3532_parse_node(struct lm3532_data *priv)
led->num_leds = fwnode_property_count_u32(child, "led-sources");
if (led->num_leds > LM3532_MAX_LED_STRINGS) {
dev_err(&priv->client->dev, "To many LED string defined\n");
dev_err(&priv->client->dev, "Too many LED string defined\n");
continue;
}


@ -135,9 +135,16 @@ static int max77650_led_probe(struct platform_device *pdev)
return rv;
}
static const struct of_device_id max77650_led_of_match[] = {
{ .compatible = "maxim,max77650-led" },
{ }
};
MODULE_DEVICE_TABLE(of, max77650_led_of_match);
static struct platform_driver max77650_led_driver = {
.driver = {
.name = "max77650-led",
.of_match_table = max77650_led_of_match,
},
.probe = max77650_led_probe,
};


@ -21,7 +21,6 @@ static void rb532_led_set(struct led_classdev *cdev,
{
if (brightness)
set_latch_u5(LO_ULED, 0);
else
set_latch_u5(0, LO_ULED);
}


@ -455,7 +455,7 @@ static void __exit pattern_trig_exit(void)
module_init(pattern_trig_init);
module_exit(pattern_trig_exit);
MODULE_AUTHOR("Raphael Teysseyre <rteysseyre@gmail.com");
MODULE_AUTHOR("Baolin Wang <baolin.wang@linaro.org");
MODULE_AUTHOR("Raphael Teysseyre <rteysseyre@gmail.com>");
MODULE_AUTHOR("Baolin Wang <baolin.wang@linaro.org>");
MODULE_DESCRIPTION("LED Pattern trigger");
MODULE_LICENSE("GPL v2");


@ -386,7 +386,7 @@ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
misc_ctrl |= SDHCI_MISC_CTRL_ENABLE_DDR50;
if (soc_data->nvquirks & NVQUIRK_ENABLE_SDR104)
misc_ctrl |= SDHCI_MISC_CTRL_ENABLE_SDR104;
if (soc_data->nvquirks & SDHCI_MISC_CTRL_ENABLE_SDR50)
if (soc_data->nvquirks & NVQUIRK_ENABLE_SDR50)
clk_ctrl |= SDHCI_CLOCK_CTRL_SDR50_TUNING_OVERRIDE;
}


@ -3913,11 +3913,13 @@ int sdhci_setup_host(struct sdhci_host *host)
if (host->ops->get_min_clock)
mmc->f_min = host->ops->get_min_clock(host);
else if (host->version >= SDHCI_SPEC_300) {
if (host->clk_mul) {
mmc->f_min = (host->max_clk * host->clk_mul) / 1024;
if (host->clk_mul)
max_clk = host->max_clk * host->clk_mul;
} else
mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_300;
/*
* Divided Clock Mode minimum clock rate is always less than
* Programmable Clock Mode minimum clock rate.
*/
mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_300;
} else
mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_200;
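The new comment can be checked with round numbers; the sketch below assumes SDHCI_MAX_DIV_SPEC_300 is 2046 (the SDHCI 3.00 maximum divisor) and picks an illustrative 200 MHz base clock with a clock multiplier of 8. Divided Clock Mode reaches a far lower minimum than Programmable Clock Mode, so deriving f_min from the divided path is always the safe lower bound:

#include <stdio.h>

#define SDHCI_MAX_DIV_SPEC_300  2046    /* assumed value, per SDHCI 3.00 */

int main(void)
{
    unsigned int max_clk = 200000000, clk_mul = 8;  /* illustrative */

    /* Programmable Clock Mode minimum (the old f_min computation) */
    printf("programmable min: %u Hz\n", max_clk * clk_mul / 1024);          /* 1562500 */
    /* Divided Clock Mode minimum (the new f_min computation) */
    printf("divided min:      %u Hz\n", max_clk / SDHCI_MAX_DIV_SPEC_300);  /* 97751 */
    return 0;
}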


@ -240,28 +240,21 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
writeb(val, host->ioaddr + reg);
}
static struct sdhci_ops sdhci_am654_ops = {
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.set_bus_width = sdhci_set_bus_width,
.set_power = sdhci_am654_set_power,
.set_clock = sdhci_am654_set_clock,
.write_b = sdhci_am654_write_b,
.reset = sdhci_reset,
};
static int sdhci_am654_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
struct sdhci_host *host = mmc_priv(mmc);
int err = sdhci_execute_tuning(mmc, opcode);
static const struct sdhci_pltfm_data sdhci_am654_pdata = {
.ops = &sdhci_am654_ops,
.quirks = SDHCI_QUIRK_INVERTED_WRITE_PROTECT |
SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
};
if (err)
return err;
/*
* Tuning data remains in the buffer after tuning.
* Do a command and data reset to get rid of it
*/
sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
static const struct sdhci_am654_driver_data sdhci_am654_drvdata = {
.pdata = &sdhci_am654_pdata,
.flags = IOMUX_PRESENT | FREQSEL_2_BIT | STRBSEL_4_BIT | DLL_PRESENT,
};
return 0;
}
static u32 sdhci_am654_cqhci_irq(struct sdhci_host *host, u32 intmask)
{
@ -276,6 +269,29 @@ static u32 sdhci_am654_cqhci_irq(struct sdhci_host *host, u32 intmask)
return 0;
}
static struct sdhci_ops sdhci_am654_ops = {
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.set_bus_width = sdhci_set_bus_width,
.set_power = sdhci_am654_set_power,
.set_clock = sdhci_am654_set_clock,
.write_b = sdhci_am654_write_b,
.irq = sdhci_am654_cqhci_irq,
.reset = sdhci_reset,
};
static const struct sdhci_pltfm_data sdhci_am654_pdata = {
.ops = &sdhci_am654_ops,
.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
};
static const struct sdhci_am654_driver_data sdhci_am654_drvdata = {
.pdata = &sdhci_am654_pdata,
.flags = IOMUX_PRESENT | FREQSEL_2_BIT | STRBSEL_4_BIT | DLL_PRESENT,
};
static struct sdhci_ops sdhci_j721e_8bit_ops = {
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,
@ -290,8 +306,7 @@ static struct sdhci_ops sdhci_j721e_8bit_ops = {
static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
.ops = &sdhci_j721e_8bit_ops,
.quirks = SDHCI_QUIRK_INVERTED_WRITE_PROTECT |
SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
};
@ -314,8 +329,7 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
.ops = &sdhci_j721e_4bit_ops,
.quirks = SDHCI_QUIRK_INVERTED_WRITE_PROTECT |
SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
};
@ -549,6 +563,8 @@ static int sdhci_am654_probe(struct platform_device *pdev)
goto pm_runtime_put;
}
host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning;
ret = sdhci_am654_init(host);
if (ret)
goto pm_runtime_put;


@ -344,9 +344,16 @@ static void slcan_transmit(struct work_struct *work)
*/
static void slcan_write_wakeup(struct tty_struct *tty)
{
struct slcan *sl = tty->disc_data;
struct slcan *sl;
rcu_read_lock();
sl = rcu_dereference(tty->disc_data);
if (!sl)
goto out;
schedule_work(&sl->tx_work);
out:
rcu_read_unlock();
}
/* Send a can_frame to a TTY queue. */
@ -644,10 +651,11 @@ static void slcan_close(struct tty_struct *tty)
return;
spin_lock_bh(&sl->lock);
tty->disc_data = NULL;
rcu_assign_pointer(tty->disc_data, NULL);
sl->tty = NULL;
spin_unlock_bh(&sl->lock);
synchronize_rcu();
flush_work(&sl->tx_work);
/* Flush network side */
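This is the standard RCU teardown pattern for TTY line disciplines: the wakeup path samples disc_data once under the read lock, while close unpublishes the pointer and waits for readers before flushing work. A condensed kernel-style sketch of the two sides (not a standalone module; it mirrors the hunks above):

/* Writer side (close): unpublish, then wait out all readers before
 * tearing anything down that a reader might still be using. */
static void example_close(struct tty_struct *tty, struct slcan *sl)
{
    rcu_assign_pointer(tty->disc_data, NULL);   /* unpublish */
    synchronize_rcu();          /* no reader still sees 'sl' */
    flush_work(&sl->tx_work);   /* now safe to flush and free */
}

/* Reader side (wakeup): the pointer may become NULL at any time,
 * so it must be sampled once under the read lock. */
static void example_write_wakeup(struct tty_struct *tty)
{
    struct slcan *sl;

    rcu_read_lock();
    sl = rcu_dereference(tty->disc_data);
    if (sl)
        schedule_work(&sl->tx_work);
    rcu_read_unlock();
}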


@ -2157,8 +2157,8 @@ static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
DMA_END_ADDR);
/* Initialize Tx NAPI */
netif_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll,
NAPI_POLL_WEIGHT);
netif_tx_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll,
NAPI_POLL_WEIGHT);
}
/* Initialize a RDMA ring */


@ -2448,6 +2448,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
if (!is_offload(adapter))
return -EOPNOTSUPP;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (!(adapter->flags & FULL_INIT_DONE))
return -EIO; /* need the memory controllers */
if (copy_from_user(&t, useraddr, sizeof(t)))


@ -70,8 +70,7 @@ static void *seq_tab_start(struct seq_file *seq, loff_t *pos)
static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos)
{
v = seq_tab_get_idx(seq->private, *pos + 1);
if (v)
++*pos;
++(*pos);
return v;
}


@ -678,8 +678,7 @@ static void *l2t_seq_start(struct seq_file *seq, loff_t *pos)
static void *l2t_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
v = l2t_get_idx(seq, *pos);
if (v)
++*pos;
++(*pos);
return v;
}
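Both seq_file fixes in this merge (the cxgb3 hunk above and this cxgb4 one) make the ->next method advance *pos unconditionally: returning NULL without moving the position can make the seq_file core re-enter ->start at the same offset and livelock the reader. A kernel-style sketch of the required shape (example_get_idx is a hypothetical lookup that returns NULL past the end; this is a pattern sketch, not a standalone module):

static void *example_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
    ++(*pos);   /* always advance, even when iteration is over */
    return example_get_idx(seq->private, *pos); /* NULL ends iteration */
}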


@ -110,7 +110,7 @@ do { \
/* Interface Mode Register (IF_MODE) */
#define IF_MODE_MASK 0x00000003 /* 30-31 Mask on i/f mode bits */
#define IF_MODE_XGMII 0x00000000 /* 30-31 XGMII (10G) interface */
#define IF_MODE_10G 0x00000000 /* 30-31 10G interface */
#define IF_MODE_GMII 0x00000002 /* 30-31 GMII (1G) interface */
#define IF_MODE_RGMII 0x00000004
#define IF_MODE_RGMII_AUTO 0x00008000
@ -440,7 +440,7 @@ static int init(struct memac_regs __iomem *regs, struct memac_cfg *cfg,
tmp = 0;
switch (phy_if) {
case PHY_INTERFACE_MODE_XGMII:
tmp |= IF_MODE_XGMII;
tmp |= IF_MODE_10G;
break;
default:
tmp |= IF_MODE_GMII;


@ -49,6 +49,7 @@ struct tgec_mdio_controller {
struct mdio_fsl_priv {
struct tgec_mdio_controller __iomem *mdio_base;
bool is_little_endian;
bool has_a011043;
};
static u32 xgmac_read32(void __iomem *regs,
@ -226,7 +227,8 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
return ret;
/* Return all Fs if nothing was there */
if (xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) {
if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
!priv->has_a011043) {
dev_err(&bus->dev,
"Error while reading PHY%d reg at %d.%hhu\n",
phy_id, dev_addr, regnum);
@ -274,6 +276,9 @@ static int xgmac_mdio_probe(struct platform_device *pdev)
priv->is_little_endian = of_property_read_bool(pdev->dev.of_node,
"little-endian");
priv->has_a011043 = of_property_read_bool(pdev->dev.of_node,
"fsl,erratum-a011043");
ret = of_mdiobus_register(bus, np);
if (ret) {
dev_err(&pdev->dev, "cannot register MDIO bus\n");


@ -1113,7 +1113,7 @@ i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
*/
pba_size--;
if (pba_num_size < (((u32)pba_size * 2) + 1)) {
hw_dbg(hw, "Buffer to small for PBA data.\n");
hw_dbg(hw, "Buffer too small for PBA data.\n");
return I40E_ERR_PARAM;
}


@ -180,7 +180,7 @@ mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
struct tx_sync_info {
u64 rcd_sn;
s32 sync_len;
u32 sync_len;
int nr_frags;
skb_frag_t frags[MAX_SKB_FRAGS];
};
@ -193,13 +193,14 @@ enum mlx5e_ktls_sync_retval {
static enum mlx5e_ktls_sync_retval
tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
u32 tcp_seq, struct tx_sync_info *info)
u32 tcp_seq, int datalen, struct tx_sync_info *info)
{
struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx;
enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE;
struct tls_record_info *record;
int remaining, i = 0;
unsigned long flags;
bool ends_before;
spin_lock_irqsave(&tx_ctx->lock, flags);
record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn);
@ -209,9 +210,21 @@ tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
goto out;
}
if (unlikely(tcp_seq < tls_record_start_seq(record))) {
ret = tls_record_is_start_marker(record) ?
MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL;
/* There are the following cases:
* 1. packet ends before start marker: bypass offload.
* 2. packet starts before start marker and ends after it: drop,
* not supported, breaks contract with kernel.
* 3. packet ends before tls record info starts: drop,
* this packet was already acknowledged and its record info
* was released.
*/
ends_before = before(tcp_seq + datalen, tls_record_start_seq(record));
if (unlikely(tls_record_is_start_marker(record))) {
ret = ends_before ? MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL;
goto out;
} else if (ends_before) {
ret = MLX5E_KTLS_SYNC_FAIL;
goto out;
}
@ -337,7 +350,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
u8 num_wqebbs;
int i = 0;
ret = tx_sync_info_get(priv_tx, seq, &info);
ret = tx_sync_info_get(priv_tx, seq, datalen, &info);
if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) {
if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) {
stats->tls_skip_no_sync_data++;
@ -351,14 +364,6 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
goto err_out;
}
if (unlikely(info.sync_len < 0)) {
if (likely(datalen <= -info.sync_len))
return MLX5E_KTLS_SYNC_DONE;
stats->tls_drop_bypass_req++;
goto err_out;
}
stats->tls_ooo++;
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
@ -378,8 +383,6 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
if (unlikely(contig_wqebbs_room < num_wqebbs))
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
for (; i < info.nr_frags; i++) {
unsigned int orig_fsz, frag_offset = 0, n = 0;
skb_frag_t *f = &info.frags[i];
@ -455,12 +458,18 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
enum mlx5e_ktls_sync_retval ret =
mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq);
if (likely(ret == MLX5E_KTLS_SYNC_DONE))
switch (ret) {
case MLX5E_KTLS_SYNC_DONE:
*wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
else if (ret == MLX5E_KTLS_SYNC_FAIL)
break;
case MLX5E_KTLS_SYNC_SKIP_NO_DATA:
if (likely(!skb->decrypted))
goto out;
WARN_ON_ONCE(1);
/* fall-through */
default: /* MLX5E_KTLS_SYNC_FAIL */
goto err_out;
else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */
goto out;
}
}
priv_tx->expected_seq = seq + datalen;
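The ends_before test above relies on wraparound-safe TCP sequence arithmetic. The kernel's before() helper compares sequence numbers via signed 32-bit subtraction; a stand-alone demonstration of its behaviour near the wrap point:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Wrap-safe TCP sequence comparison, as conventionally defined in
 * the kernel: seq1 is "before" seq2 if their difference, taken as a
 * signed 32-bit value, is negative. */
static bool before(uint32_t seq1, uint32_t seq2)
{
    return (int32_t)(seq1 - seq2) < 0;
}

int main(void)
{
    uint32_t rec_start = 0xfffffff0u;   /* record starts near the wrap */

    /* packet [0xffffffe0, 0xffffffe8) ends before the record */
    printf("%d\n", before(0xffffffe0u + 8, rec_start));     /* 1 */
    /* packet crossing the wrap point is NOT before the record */
    printf("%d\n", before(0xffffffe0u + 0x40, rec_start));  /* 0 */
    return 0;
}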


@ -4084,6 +4084,13 @@ static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
u32 rate_mbps;
int err;
vport_num = rpriv->rep->vport;
if (vport_num >= MLX5_VPORT_ECPF) {
NL_SET_ERR_MSG_MOD(extack,
"Ingress rate limit is supported only for Eswitch ports connected to VFs");
return -EOPNOTSUPP;
}
esw = priv->mdev->priv.eswitch;
/* rate is given in bytes/sec.
* First convert to bits/sec and then round to the nearest mbit/secs.
@ -4092,8 +4099,6 @@ static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
* 1 mbit/sec.
*/
rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0;
vport_num = rpriv->rep->vport;
err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
if (err)
NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");


@ -1931,8 +1931,10 @@ static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw)
struct mlx5_vport *vport;
int i;
mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
memset(&vport->info, 0, sizeof(vport->info));
vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO;
}
}
/* Public E-Switch API */


@ -1172,7 +1172,7 @@ static int esw_offloads_start(struct mlx5_eswitch *esw,
return -EINVAL;
}
mlx5_eswitch_disable(esw, false);
mlx5_eswitch_disable(esw, true);
mlx5_eswitch_update_num_of_vfs(esw, esw->dev->priv.sriov.num_vfs);
err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS);
if (err) {
@ -2014,7 +2014,8 @@ int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type
int esw_offloads_enable(struct mlx5_eswitch *esw)
{
int err;
struct mlx5_vport *vport;
int err, i;
if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, reformat) &&
MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, decap))
@ -2031,6 +2032,10 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
if (err)
goto err_vport_metadata;
/* Representor will control the vport link state */
mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN;
err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE);
if (err)
goto err_vports;
@ -2060,7 +2065,7 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
{
int err, err1;
mlx5_eswitch_disable(esw, false);
mlx5_eswitch_disable(esw, true);
err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");


@ -1563,6 +1563,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
{ PCI_VDEVICE(MELLANOX, 0x101d) }, /* ConnectX-6 Dx */
{ PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */
{ PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */
{ PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */
{ PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */
{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */
{ PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */


@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2019 Mellanox Technologies. */
#include <linux/smp.h>
#include "dr_types.h"
#define QUEUE_SIZE 128
@ -729,7 +730,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
if (!in)
goto err_cqwq;
vector = smp_processor_id() % mlx5_comp_vectors_count(mdev);
vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev);
err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn);
if (err) {
kvfree(in);


@ -379,7 +379,6 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
list_for_each_entry(dst, &fte->node.children, node.list) {
enum mlx5_flow_destination_type type = dst->dest_attr.type;
u32 id;
if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
num_term_actions >= MLX5_FLOW_CONTEXT_ACTION_MAX) {
@ -387,19 +386,10 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
goto free_actions;
}
switch (type) {
case MLX5_FLOW_DESTINATION_TYPE_COUNTER:
id = dst->dest_attr.counter_id;
if (type == MLX5_FLOW_DESTINATION_TYPE_COUNTER)
continue;
tmp_action =
mlx5dr_action_create_flow_counter(id);
if (!tmp_action) {
err = -ENOMEM;
goto free_actions;
}
fs_dr_actions[fs_dr_num_actions++] = tmp_action;
actions[num_actions++] = tmp_action;
break;
switch (type) {
case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
tmp_action = create_ft_action(domain, dst);
if (!tmp_action) {
@ -432,6 +422,32 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
}
}
if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
list_for_each_entry(dst, &fte->node.children, node.list) {
u32 id;
if (dst->dest_attr.type !=
MLX5_FLOW_DESTINATION_TYPE_COUNTER)
continue;
if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
err = -ENOSPC;
goto free_actions;
}
id = dst->dest_attr.counter_id;
tmp_action =
mlx5dr_action_create_flow_counter(id);
if (!tmp_action) {
err = -ENOMEM;
goto free_actions;
}
fs_dr_actions[fs_dr_num_actions++] = tmp_action;
actions[num_actions++] = tmp_action;
}
}
params.match_sz = match_sz;
params.match_buf = (u64 *)fte->val;
if (num_term_actions == 1) {


@ -8,6 +8,7 @@
#include <linux/string.h>
#include <linux/rhashtable.h>
#include <linux/netdevice.h>
#include <linux/mutex.h>
#include <net/net_namespace.h>
#include <net/tc_act/tc_vlan.h>
@ -25,6 +26,7 @@ struct mlxsw_sp_acl {
struct mlxsw_sp_fid *dummy_fid;
struct rhashtable ruleset_ht;
struct list_head rules;
struct mutex rules_lock; /* Protects rules list */
struct {
struct delayed_work dw;
unsigned long interval; /* ms */
@ -701,7 +703,9 @@ int mlxsw_sp_acl_rule_add(struct mlxsw_sp *mlxsw_sp,
goto err_ruleset_block_bind;
}
mutex_lock(&mlxsw_sp->acl->rules_lock);
list_add_tail(&rule->list, &mlxsw_sp->acl->rules);
mutex_unlock(&mlxsw_sp->acl->rules_lock);
block->rule_count++;
block->egress_blocker_rule_count += rule->rulei->egress_bind_blocker;
return 0;
@ -723,7 +727,9 @@ void mlxsw_sp_acl_rule_del(struct mlxsw_sp *mlxsw_sp,
block->egress_blocker_rule_count -= rule->rulei->egress_bind_blocker;
ruleset->ht_key.block->rule_count--;
mutex_lock(&mlxsw_sp->acl->rules_lock);
list_del(&rule->list);
mutex_unlock(&mlxsw_sp->acl->rules_lock);
if (!ruleset->ht_key.chain_index &&
mlxsw_sp_acl_ruleset_is_singular(ruleset))
mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset,
@ -783,19 +789,18 @@ static int mlxsw_sp_acl_rules_activity_update(struct mlxsw_sp_acl *acl)
struct mlxsw_sp_acl_rule *rule;
int err;
/* Protect internal structures from changes */
rtnl_lock();
mutex_lock(&acl->rules_lock);
list_for_each_entry(rule, &acl->rules, list) {
err = mlxsw_sp_acl_rule_activity_update(acl->mlxsw_sp,
rule);
if (err)
goto err_rule_update;
}
rtnl_unlock();
mutex_unlock(&acl->rules_lock);
return 0;
err_rule_update:
rtnl_unlock();
mutex_unlock(&acl->rules_lock);
return err;
}
@ -880,6 +885,7 @@ int mlxsw_sp_acl_init(struct mlxsw_sp *mlxsw_sp)
acl->dummy_fid = fid;
INIT_LIST_HEAD(&acl->rules);
mutex_init(&acl->rules_lock);
err = mlxsw_sp_acl_tcam_init(mlxsw_sp, &acl->tcam);
if (err)
goto err_acl_ops_init;
@ -892,6 +898,7 @@ int mlxsw_sp_acl_init(struct mlxsw_sp *mlxsw_sp)
return 0;
err_acl_ops_init:
mutex_destroy(&acl->rules_lock);
mlxsw_sp_fid_put(fid);
err_fid_get:
rhashtable_destroy(&acl->ruleset_ht);
@ -908,6 +915,7 @@ void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp)
cancel_delayed_work_sync(&mlxsw_sp->acl->rule_activity_update.dw);
mlxsw_sp_acl_tcam_fini(mlxsw_sp, &acl->tcam);
mutex_destroy(&acl->rules_lock);
WARN_ON(!list_empty(&acl->rules));
mlxsw_sp_fid_put(acl->dummy_fid);
rhashtable_destroy(&acl->ruleset_ht);


@ -64,6 +64,8 @@ static int sonic_open(struct net_device *dev)
netif_dbg(lp, ifup, dev, "%s: initializing sonic driver\n", __func__);
spin_lock_init(&lp->lock);
for (i = 0; i < SONIC_NUM_RRS; i++) {
struct sk_buff *skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2);
if (skb == NULL) {
@ -114,6 +116,24 @@ static int sonic_open(struct net_device *dev)
return 0;
}
/* Wait for the SONIC to become idle. */
static void sonic_quiesce(struct net_device *dev, u16 mask)
{
struct sonic_local * __maybe_unused lp = netdev_priv(dev);
int i;
u16 bits;
for (i = 0; i < 1000; ++i) {
bits = SONIC_READ(SONIC_CMD) & mask;
if (!bits)
return;
if (irqs_disabled() || in_interrupt())
udelay(20);
else
usleep_range(100, 200);
}
WARN_ONCE(1, "command deadline expired! 0x%04x\n", bits);
}
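sonic_quiesce() is a bounded poll: re-read the command register until the requested bits clear, delaying or sleeping between polls, and warn rather than spin forever once the deadline expires. A stand-alone userspace analogue with the hardware simulated (all names hypothetical):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

static uint16_t fake_cmd = 0x0001;  /* pretend a command is pending */

static uint16_t read_cmd(void)
{
    /* simulate the hardware finishing after a few polls */
    static int polls;

    if (++polls >= 3)
        fake_cmd = 0;
    return fake_cmd;
}

/* Poll until the masked bits clear, with a hard iteration limit. */
static int quiesce(uint16_t mask)
{
    int i;

    for (i = 0; i < 1000; i++) {
        if (!(read_cmd() & mask))
            return 0;
        usleep(100);
    }
    return -1;  /* deadline expired */
}

int main(void)
{
    printf("quiesce: %d\n", quiesce(0x0001));
    return 0;
}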
/*
* Close the SONIC device
@ -130,6 +150,9 @@ static int sonic_close(struct net_device *dev)
/*
* stop the SONIC, disable interrupts
*/
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS);
sonic_quiesce(dev, SONIC_CR_ALL);
SONIC_WRITE(SONIC_IMR, 0);
SONIC_WRITE(SONIC_ISR, 0x7fff);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RST);
@ -169,6 +192,9 @@ static void sonic_tx_timeout(struct net_device *dev, unsigned int txqueue)
* put the Sonic into software-reset mode and
* disable all interrupts before releasing DMA buffers
*/
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS);
sonic_quiesce(dev, SONIC_CR_ALL);
SONIC_WRITE(SONIC_IMR, 0);
SONIC_WRITE(SONIC_ISR, 0x7fff);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RST);
@ -206,8 +232,6 @@ static void sonic_tx_timeout(struct net_device *dev, unsigned int txqueue)
* wake the tx queue
* Concurrently with all of this, the SONIC is potentially writing to
* the status flags of the TDs.
* Until some mutual exclusion is added, this code will not work with SMP. However,
* MIPS Jazz machines and m68k Macs were all uni-processor machines.
*/
static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
@ -215,7 +239,8 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
struct sonic_local *lp = netdev_priv(dev);
dma_addr_t laddr;
int length;
int entry = lp->next_tx;
int entry;
unsigned long flags;
netif_dbg(lp, tx_queued, dev, "%s: skb=%p\n", __func__, skb);
@ -237,6 +262,10 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
spin_lock_irqsave(&lp->lock, flags);
entry = lp->next_tx;
sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */
sonic_tda_put(dev, entry, SONIC_TD_FRAG_COUNT, 1); /* single fragment */
sonic_tda_put(dev, entry, SONIC_TD_PKTSIZE, length); /* length of packet */
@ -246,10 +275,6 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
sonic_tda_put(dev, entry, SONIC_TD_LINK,
sonic_tda_get(dev, entry, SONIC_TD_LINK) | SONIC_EOL);
/*
* Must set tx_skb[entry] only after clearing status, and
* before clearing EOL and before stopping queue
*/
wmb();
lp->tx_len[entry] = length;
lp->tx_laddr[entry] = laddr;
@ -272,6 +297,8 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP);
spin_unlock_irqrestore(&lp->lock, flags);
return NETDEV_TX_OK;
}
@ -284,15 +311,28 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
struct net_device *dev = dev_id;
struct sonic_local *lp = netdev_priv(dev);
int status;
unsigned long flags;
/* The lock has two purposes. Firstly, it synchronizes sonic_interrupt()
* with sonic_send_packet() so that the two functions can share state.
* Secondly, it makes sonic_interrupt() re-entrant, as that is required
* by macsonic which must use two IRQs with different priority levels.
*/
spin_lock_irqsave(&lp->lock, flags);
status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT;
if (!status) {
spin_unlock_irqrestore(&lp->lock, flags);
if (!(status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT))
return IRQ_NONE;
}
do {
SONIC_WRITE(SONIC_ISR, status); /* clear the interrupt(s) */
if (status & SONIC_INT_PKTRX) {
netif_dbg(lp, intr, dev, "%s: packet rx\n", __func__);
sonic_rx(dev); /* got packet(s) */
SONIC_WRITE(SONIC_ISR, SONIC_INT_PKTRX); /* clear the interrupt */
}
if (status & SONIC_INT_TXDN) {
@ -300,11 +340,12 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
int td_status;
int freed_some = 0;
/* At this point, cur_tx is the index of a TD that is one of:
* unallocated/freed (status set & tx_skb[entry] clear)
* allocated and sent (status set & tx_skb[entry] set )
* allocated and not yet sent (status clear & tx_skb[entry] set )
* still being allocated by sonic_send_packet (status clear & tx_skb[entry] clear)
/* The state of a Transmit Descriptor may be inferred
* from { tx_skb[entry], td_status } as follows.
* { clear, clear } => the TD has never been used
* { set, clear } => the TD was handed to SONIC
* { set, set } => the TD was handed back
* { clear, set } => the TD is available for re-use
*/
netif_dbg(lp, intr, dev, "%s: tx done\n", __func__);
@ -313,18 +354,19 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
if ((td_status = sonic_tda_get(dev, entry, SONIC_TD_STATUS)) == 0)
break;
if (td_status & 0x0001) {
if (td_status & SONIC_TCR_PTX) {
lp->stats.tx_packets++;
lp->stats.tx_bytes += sonic_tda_get(dev, entry, SONIC_TD_PKTSIZE);
} else {
lp->stats.tx_errors++;
if (td_status & 0x0642)
if (td_status & (SONIC_TCR_EXD |
SONIC_TCR_EXC | SONIC_TCR_BCM))
lp->stats.tx_aborted_errors++;
if (td_status & 0x0180)
if (td_status &
(SONIC_TCR_NCRS | SONIC_TCR_CRLS))
lp->stats.tx_carrier_errors++;
if (td_status & 0x0020)
if (td_status & SONIC_TCR_OWC)
lp->stats.tx_window_errors++;
if (td_status & 0x0004)
if (td_status & SONIC_TCR_FU)
lp->stats.tx_fifo_errors++;
}
@ -346,7 +388,6 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
if (freed_some || lp->tx_skb[entry] == NULL)
netif_wake_queue(dev); /* The ring is no longer full */
lp->cur_tx = entry;
SONIC_WRITE(SONIC_ISR, SONIC_INT_TXDN); /* clear the interrupt */
}
/*
@ -355,42 +396,37 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
if (status & SONIC_INT_RFO) {
netif_dbg(lp, rx_err, dev, "%s: rx fifo overrun\n",
__func__);
lp->stats.rx_fifo_errors++;
SONIC_WRITE(SONIC_ISR, SONIC_INT_RFO); /* clear the interrupt */
}
if (status & SONIC_INT_RDE) {
netif_dbg(lp, rx_err, dev, "%s: rx descriptors exhausted\n",
__func__);
lp->stats.rx_dropped++;
SONIC_WRITE(SONIC_ISR, SONIC_INT_RDE); /* clear the interrupt */
}
if (status & SONIC_INT_RBAE) {
netif_dbg(lp, rx_err, dev, "%s: rx buffer area exceeded\n",
__func__);
lp->stats.rx_dropped++;
SONIC_WRITE(SONIC_ISR, SONIC_INT_RBAE); /* clear the interrupt */
}
/* counter overruns; all counters are 16bit wide */
if (status & SONIC_INT_FAE) {
if (status & SONIC_INT_FAE)
lp->stats.rx_frame_errors += 65536;
SONIC_WRITE(SONIC_ISR, SONIC_INT_FAE); /* clear the interrupt */
}
if (status & SONIC_INT_CRC) {
if (status & SONIC_INT_CRC)
lp->stats.rx_crc_errors += 65536;
SONIC_WRITE(SONIC_ISR, SONIC_INT_CRC); /* clear the interrupt */
}
if (status & SONIC_INT_MP) {
if (status & SONIC_INT_MP)
lp->stats.rx_missed_errors += 65536;
SONIC_WRITE(SONIC_ISR, SONIC_INT_MP); /* clear the interrupt */
}
/* transmit error */
if (status & SONIC_INT_TXER) {
if (SONIC_READ(SONIC_TCR) & SONIC_TCR_FU)
netif_dbg(lp, tx_err, dev, "%s: tx fifo underrun\n",
__func__);
SONIC_WRITE(SONIC_ISR, SONIC_INT_TXER); /* clear the interrupt */
u16 tcr = SONIC_READ(SONIC_TCR);
netif_dbg(lp, tx_err, dev, "%s: TXER intr, TCR %04x\n",
__func__, tcr);
if (tcr & (SONIC_TCR_EXD | SONIC_TCR_EXC |
SONIC_TCR_FU | SONIC_TCR_BCM)) {
/* Aborted transmission. Try again. */
netif_stop_queue(dev);
SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP);
}
}
/* bus retry */
@ -400,107 +436,164 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
/* ... to help debug DMA problems causing endless interrupts. */
/* Bounce the eth interface to turn on the interrupt again. */
SONIC_WRITE(SONIC_IMR, 0);
SONIC_WRITE(SONIC_ISR, SONIC_INT_BR); /* clear the interrupt */
}
/* load CAM done */
if (status & SONIC_INT_LCD)
SONIC_WRITE(SONIC_ISR, SONIC_INT_LCD); /* clear the interrupt */
} while((status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT));
status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT;
} while (status);
spin_unlock_irqrestore(&lp->lock, flags);
return IRQ_HANDLED;
}
/* Return the array index corresponding to a given Receive Buffer pointer. */
static int index_from_addr(struct sonic_local *lp, dma_addr_t addr,
unsigned int last)
{
unsigned int i = last;
do {
i = (i + 1) & SONIC_RRS_MASK;
if (addr == lp->rx_laddr[i])
return i;
} while (i != last);
return -ENOENT;
}
/* Allocate and map a new skb to be used as a receive buffer. */
static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp,
struct sk_buff **new_skb, dma_addr_t *new_addr)
{
*new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2);
if (!*new_skb)
return false;
if (SONIC_BUS_SCALE(lp->dma_bitmode) == 2)
skb_reserve(*new_skb, 2);
*new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE),
SONIC_RBSIZE, DMA_FROM_DEVICE);
if (!*new_addr) {
dev_kfree_skb(*new_skb);
*new_skb = NULL;
return false;
}
return true;
}
/* Place a new receive resource in the Receive Resource Area and update RWP. */
static void sonic_update_rra(struct net_device *dev, struct sonic_local *lp,
dma_addr_t old_addr, dma_addr_t new_addr)
{
unsigned int entry = sonic_rr_entry(dev, SONIC_READ(SONIC_RWP));
unsigned int end = sonic_rr_entry(dev, SONIC_READ(SONIC_RRP));
u32 buf;
/* The resources in the range [RRP, RWP) belong to the SONIC. This loop
* scans the other resources in the RRA, those in the range [RWP, RRP).
*/
do {
buf = (sonic_rra_get(dev, entry, SONIC_RR_BUFADR_H) << 16) |
sonic_rra_get(dev, entry, SONIC_RR_BUFADR_L);
if (buf == old_addr)
break;
entry = (entry + 1) & SONIC_RRS_MASK;
} while (entry != end);
WARN_ONCE(buf != old_addr, "failed to find resource!\n");
sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, new_addr >> 16);
sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, new_addr & 0xffff);
entry = (entry + 1) & SONIC_RRS_MASK;
SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, entry));
}
/*
* We have a good packet(s), pass it/them up the network stack.
*/
static void sonic_rx(struct net_device *dev)
{
struct sonic_local *lp = netdev_priv(dev);
int status;
int entry = lp->cur_rx;
int prev_entry = lp->eol_rx;
bool rbe = false;
while (sonic_rda_get(dev, entry, SONIC_RD_IN_USE) == 0) {
struct sk_buff *used_skb;
struct sk_buff *new_skb;
dma_addr_t new_laddr;
u16 bufadr_l;
u16 bufadr_h;
int pkt_len;
u16 status = sonic_rda_get(dev, entry, SONIC_RD_STATUS);
status = sonic_rda_get(dev, entry, SONIC_RD_STATUS);
if (status & SONIC_RCR_PRX) {
/* Malloc up new buffer. */
new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2);
if (new_skb == NULL) {
lp->stats.rx_dropped++;
break;
}
/* provide 16 byte IP header alignment unless DMA requires otherwise */
if(SONIC_BUS_SCALE(lp->dma_bitmode) == 2)
skb_reserve(new_skb, 2);
/* If the RD has LPKT set, the chip has finished with the RB */
if ((status & SONIC_RCR_PRX) && (status & SONIC_RCR_LPKT)) {
struct sk_buff *new_skb;
dma_addr_t new_laddr;
u32 addr = (sonic_rda_get(dev, entry,
SONIC_RD_PKTPTR_H) << 16) |
sonic_rda_get(dev, entry, SONIC_RD_PKTPTR_L);
int i = index_from_addr(lp, addr, entry);
new_laddr = dma_map_single(lp->device, skb_put(new_skb, SONIC_RBSIZE),
SONIC_RBSIZE, DMA_FROM_DEVICE);
if (!new_laddr) {
dev_kfree_skb(new_skb);
printk(KERN_ERR "%s: Failed to map rx buffer, dropping packet.\n", dev->name);
lp->stats.rx_dropped++;
if (i < 0) {
WARN_ONCE(1, "failed to find buffer!\n");
break;
}
/* now we have a new skb to replace it, pass the used one up the stack */
dma_unmap_single(lp->device, lp->rx_laddr[entry], SONIC_RBSIZE, DMA_FROM_DEVICE);
used_skb = lp->rx_skb[entry];
pkt_len = sonic_rda_get(dev, entry, SONIC_RD_PKTLEN);
skb_trim(used_skb, pkt_len);
used_skb->protocol = eth_type_trans(used_skb, dev);
netif_rx(used_skb);
lp->stats.rx_packets++;
lp->stats.rx_bytes += pkt_len;
if (sonic_alloc_rb(dev, lp, &new_skb, &new_laddr)) {
struct sk_buff *used_skb = lp->rx_skb[i];
int pkt_len;
/* and insert the new skb */
lp->rx_laddr[entry] = new_laddr;
lp->rx_skb[entry] = new_skb;
/* Pass the used buffer up the stack */
dma_unmap_single(lp->device, addr, SONIC_RBSIZE,
DMA_FROM_DEVICE);
bufadr_l = (unsigned long)new_laddr & 0xffff;
bufadr_h = (unsigned long)new_laddr >> 16;
sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, bufadr_l);
sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, bufadr_h);
} else {
/* This should only happen if we enable accepting broken packets. */
lp->stats.rx_errors++;
if (status & SONIC_RCR_FAER)
lp->stats.rx_frame_errors++;
if (status & SONIC_RCR_CRCR)
lp->stats.rx_crc_errors++;
}
if (status & SONIC_RCR_LPKT) {
/*
* this was the last packet out of the current receive buffer
* give the buffer back to the SONIC
pkt_len = sonic_rda_get(dev, entry,
SONIC_RD_PKTLEN);
skb_trim(used_skb, pkt_len);
used_skb->protocol = eth_type_trans(used_skb,
dev);
netif_rx(used_skb);
lp->stats.rx_packets++;
lp->stats.rx_bytes += pkt_len;
lp->rx_skb[i] = new_skb;
lp->rx_laddr[i] = new_laddr;
} else {
/* Failed to obtain a new buffer so re-use it */
new_laddr = addr;
lp->stats.rx_dropped++;
}
/* If RBE is already asserted when RWP advances then
* it's safe to clear RBE after processing this packet.
*/
lp->cur_rwp += SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode);
if (lp->cur_rwp >= lp->rra_end) lp->cur_rwp = lp->rra_laddr & 0xffff;
SONIC_WRITE(SONIC_RWP, lp->cur_rwp);
if (SONIC_READ(SONIC_ISR) & SONIC_INT_RBE) {
netif_dbg(lp, rx_err, dev, "%s: rx buffer exhausted\n",
__func__);
SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); /* clear the flag */
}
} else
printk(KERN_ERR "%s: rx desc without RCR_LPKT. Shouldn't happen !?\n",
dev->name);
rbe = rbe || SONIC_READ(SONIC_ISR) & SONIC_INT_RBE;
sonic_update_rra(dev, lp, addr, new_laddr);
}
/*
* give back the descriptor
*/
sonic_rda_put(dev, entry, SONIC_RD_LINK,
sonic_rda_get(dev, entry, SONIC_RD_LINK) | SONIC_EOL);
sonic_rda_put(dev, entry, SONIC_RD_STATUS, 0);
sonic_rda_put(dev, entry, SONIC_RD_IN_USE, 1);
sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK,
sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK) & ~SONIC_EOL);
lp->eol_rx = entry;
lp->cur_rx = entry = (entry + 1) & SONIC_RDS_MASK;
prev_entry = entry;
entry = (entry + 1) & SONIC_RDS_MASK;
}
lp->cur_rx = entry;
if (prev_entry != lp->eol_rx) {
/* Advance the EOL flag to put descriptors back into service */
sonic_rda_put(dev, prev_entry, SONIC_RD_LINK, SONIC_EOL |
sonic_rda_get(dev, prev_entry, SONIC_RD_LINK));
sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, ~SONIC_EOL &
sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK));
lp->eol_rx = prev_entry;
}
if (rbe)
SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE);
/*
* If any worth-while packets have been received, netif_rx()
* has done a mark_bh(NET_BH) for us and will work on them
@ -550,6 +643,8 @@ static void sonic_multicast_list(struct net_device *dev)
(netdev_mc_count(dev) > 15)) {
rcr |= SONIC_RCR_AMC;
} else {
unsigned long flags;
netif_dbg(lp, ifup, dev, "%s: mc_count %d\n", __func__,
netdev_mc_count(dev));
sonic_set_cam_enable(dev, 1); /* always enable our own address */
@ -563,9 +658,14 @@ static void sonic_multicast_list(struct net_device *dev)
i++;
}
SONIC_WRITE(SONIC_CDC, 16);
/* issue Load CAM command */
SONIC_WRITE(SONIC_CDP, lp->cda_laddr & 0xffff);
/* LCAM and TXP commands can't be used simultaneously */
spin_lock_irqsave(&lp->lock, flags);
sonic_quiesce(dev, SONIC_CR_TXP);
SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM);
sonic_quiesce(dev, SONIC_CR_LCAM);
spin_unlock_irqrestore(&lp->lock, flags);
}
}
@ -580,7 +680,6 @@ static void sonic_multicast_list(struct net_device *dev)
*/
static int sonic_init(struct net_device *dev)
{
unsigned int cmd;
struct sonic_local *lp = netdev_priv(dev);
int i;
@ -592,12 +691,16 @@ static int sonic_init(struct net_device *dev)
SONIC_WRITE(SONIC_ISR, 0x7fff);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RST);
/* While in reset mode, clear CAM Enable register */
SONIC_WRITE(SONIC_CE, 0);
/*
* clear software reset flag, disable receiver, clear and
* enable interrupts, then completely initialize the SONIC
*/
SONIC_WRITE(SONIC_CMD, 0);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS | SONIC_CR_STP);
sonic_quiesce(dev, SONIC_CR_ALL);
/*
* initialize the receive resource area
@ -615,15 +718,10 @@ static int sonic_init(struct net_device *dev)
}
/* initialize all RRA registers */
lp->rra_end = (lp->rra_laddr + SONIC_NUM_RRS * SIZEOF_SONIC_RR *
SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff;
lp->cur_rwp = (lp->rra_laddr + (SONIC_NUM_RRS - 1) * SIZEOF_SONIC_RR *
SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff;
SONIC_WRITE(SONIC_RSA, lp->rra_laddr & 0xffff);
SONIC_WRITE(SONIC_REA, lp->rra_end);
SONIC_WRITE(SONIC_RRP, lp->rra_laddr & 0xffff);
SONIC_WRITE(SONIC_RWP, lp->cur_rwp);
SONIC_WRITE(SONIC_RSA, sonic_rr_addr(dev, 0));
SONIC_WRITE(SONIC_REA, sonic_rr_addr(dev, SONIC_NUM_RRS));
SONIC_WRITE(SONIC_RRP, sonic_rr_addr(dev, 0));
SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, SONIC_NUM_RRS - 1));
SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16);
SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE >> 1) - (lp->dma_bitmode ? 2 : 1));
@ -631,14 +729,7 @@ static int sonic_init(struct net_device *dev)
netif_dbg(lp, ifup, dev, "%s: issuing RRRA command\n", __func__);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA);
i = 0;
while (i++ < 100) {
if (SONIC_READ(SONIC_CMD) & SONIC_CR_RRRA)
break;
}
netif_dbg(lp, ifup, dev, "%s: status=%x, i=%d\n", __func__,
SONIC_READ(SONIC_CMD), i);
sonic_quiesce(dev, SONIC_CR_RRRA);
/*
* Initialize the receive descriptors so that they
@ -713,28 +804,17 @@ static int sonic_init(struct net_device *dev)
* load the CAM
*/
SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM);
i = 0;
while (i++ < 100) {
if (SONIC_READ(SONIC_ISR) & SONIC_INT_LCD)
break;
}
netif_dbg(lp, ifup, dev, "%s: CMD=%x, ISR=%x, i=%d\n", __func__,
SONIC_READ(SONIC_CMD), SONIC_READ(SONIC_ISR), i);
sonic_quiesce(dev, SONIC_CR_LCAM);
/*
* enable receiver, disable loopback
* and enable all interrupts
*/
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN | SONIC_CR_STP);
SONIC_WRITE(SONIC_RCR, SONIC_RCR_DEFAULT);
SONIC_WRITE(SONIC_TCR, SONIC_TCR_DEFAULT);
SONIC_WRITE(SONIC_ISR, 0x7fff);
SONIC_WRITE(SONIC_IMR, SONIC_IMR_DEFAULT);
cmd = SONIC_READ(SONIC_CMD);
if ((cmd & SONIC_CR_RXEN) == 0 || (cmd & SONIC_CR_STP) == 0)
printk(KERN_ERR "sonic_init: failed, status=%x\n", cmd);
SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN);
netif_dbg(lp, ifup, dev, "%s: new status=%x\n", __func__,
SONIC_READ(SONIC_CMD));


@ -110,6 +110,9 @@
#define SONIC_CR_TXP 0x0002
#define SONIC_CR_HTX 0x0001
#define SONIC_CR_ALL (SONIC_CR_LCAM | SONIC_CR_RRRA | \
SONIC_CR_RXEN | SONIC_CR_TXP)
/*
* SONIC data configuration bits
*/
@ -175,6 +178,7 @@
#define SONIC_TCR_NCRS 0x0100
#define SONIC_TCR_CRLS 0x0080
#define SONIC_TCR_EXC 0x0040
#define SONIC_TCR_OWC 0x0020
#define SONIC_TCR_PMB 0x0008
#define SONIC_TCR_FU 0x0004
#define SONIC_TCR_BCM 0x0002
@ -274,8 +278,9 @@
#define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */
#define SONIC_NUM_TDS 16 /* number of transmit descriptors */
#define SONIC_RDS_MASK (SONIC_NUM_RDS-1)
#define SONIC_TDS_MASK (SONIC_NUM_TDS-1)
#define SONIC_RRS_MASK (SONIC_NUM_RRS - 1)
#define SONIC_RDS_MASK (SONIC_NUM_RDS - 1)
#define SONIC_TDS_MASK (SONIC_NUM_TDS - 1)
#define SONIC_RBSIZE 1520 /* size of one resource buffer */
@ -312,8 +317,6 @@ struct sonic_local {
u32 rda_laddr; /* logical DMA address of RDA */
dma_addr_t rx_laddr[SONIC_NUM_RRS]; /* logical DMA addresses of rx skbuffs */
dma_addr_t tx_laddr[SONIC_NUM_TDS]; /* logical DMA addresses of tx skbuffs */
unsigned int rra_end;
unsigned int cur_rwp;
unsigned int cur_rx;
unsigned int cur_tx; /* first unacked transmit packet */
unsigned int eol_rx;
@ -322,6 +325,7 @@ struct sonic_local {
int msg_enable;
struct device *device; /* generic device */
struct net_device_stats stats;
spinlock_t lock;
};
#define TX_TIMEOUT (3 * HZ)
@ -344,30 +348,30 @@ static void sonic_msg_init(struct net_device *dev);
as far as we can tell. */
/* OpenBSD calls this "SWO". I'd like to think that sonic_buf_put()
is a much better name. */
static inline void sonic_buf_put(void* base, int bitmode,
static inline void sonic_buf_put(u16 *base, int bitmode,
int offset, __u16 val)
{
if (bitmode)
#ifdef __BIG_ENDIAN
((__u16 *) base + (offset*2))[1] = val;
__raw_writew(val, base + (offset * 2) + 1);
#else
((__u16 *) base + (offset*2))[0] = val;
__raw_writew(val, base + (offset * 2) + 0);
#endif
else
((__u16 *) base)[offset] = val;
__raw_writew(val, base + (offset * 1) + 0);
}
static inline __u16 sonic_buf_get(void* base, int bitmode,
static inline __u16 sonic_buf_get(u16 *base, int bitmode,
int offset)
{
if (bitmode)
#ifdef __BIG_ENDIAN
return ((volatile __u16 *) base + (offset*2))[1];
return __raw_readw(base + (offset * 2) + 1);
#else
return ((volatile __u16 *) base + (offset*2))[0];
return __raw_readw(base + (offset * 2) + 0);
#endif
else
return ((volatile __u16 *) base)[offset];
return __raw_readw(base + (offset * 1) + 0);
}
/* Inlines that you should actually use for reading/writing DMA buffers */
@ -447,6 +451,22 @@ static inline __u16 sonic_rra_get(struct net_device* dev, int entry,
(entry * SIZEOF_SONIC_RR) + offset);
}
static inline u16 sonic_rr_addr(struct net_device *dev, int entry)
{
struct sonic_local *lp = netdev_priv(dev);
return lp->rra_laddr +
entry * SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode);
}
static inline u16 sonic_rr_entry(struct net_device *dev, u16 addr)
{
struct sonic_local *lp = netdev_priv(dev);
return (addr - (u16)lp->rra_laddr) / (SIZEOF_SONIC_RR *
SONIC_BUS_SCALE(lp->dma_bitmode));
}
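The two new helpers are inverse mappings between a resource index and its bus address, a base plus a fixed stride. A stand-alone model with illustrative constants (the stride values are assumptions for the demo, not taken from the datasheet):

#include <stdio.h>
#include <stdint.h>

#define SIZEOF_RR   4   /* words per resource, illustrative */
#define BUS_SCALE   2   /* 16-bit bus mode stride multiplier, illustrative */

static uint16_t base = 0x1000;

/* entry index -> bus address */
static uint16_t rr_addr(int entry)
{
    return base + entry * SIZEOF_RR * BUS_SCALE;
}

/* bus address -> entry index */
static int rr_entry(uint16_t addr)
{
    return (addr - base) / (SIZEOF_RR * BUS_SCALE);
}

int main(void)
{
    int e;

    for (e = 0; e < 4; e++)
        printf("entry %d -> addr 0x%04x -> entry %d\n",
               e, rr_addr(e), rr_entry(rr_addr(e)));
    return 0;
}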
static const char version[] =
"sonic.c:v0.92 20.9.98 tsbogend@alpha.franken.de\n";


@ -2043,6 +2043,7 @@ static void qlcnic_83xx_exec_template_cmd(struct qlcnic_adapter *p_dev,
break;
}
entry += p_hdr->size;
cond_resched();
}
p_dev->ahw->reset.seq_index = index;
}


@ -703,6 +703,7 @@ static u32 qlcnic_read_memory_test_agent(struct qlcnic_adapter *adapter,
addr += 16;
reg_read -= 16;
ret += 16;
cond_resched();
}
out:
mutex_unlock(&adapter->ahw->mem_lock);
@ -1383,6 +1384,7 @@ int qlcnic_dump_fw(struct qlcnic_adapter *adapter)
buf_offset += entry->hdr.cap_size;
entry_offset += entry->hdr.offset;
buffer = fw_dump->data + buf_offset;
cond_resched();
}
fw_dump->clr = 1;


@ -412,9 +412,9 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
*mac = NULL;
}
rc = of_get_phy_mode(np, &plat->phy_interface);
if (rc)
return ERR_PTR(rc);
plat->phy_interface = device_get_phy_mode(&pdev->dev);
if (plat->phy_interface < 0)
return ERR_PTR(plat->phy_interface);
plat->interface = stmmac_of_get_mac_mode(np);
if (plat->interface < 0)


@ -804,19 +804,21 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
return NULL;
}
if (sock->sk->sk_protocol != IPPROTO_UDP) {
sk = sock->sk;
if (sk->sk_protocol != IPPROTO_UDP ||
sk->sk_type != SOCK_DGRAM ||
(sk->sk_family != AF_INET && sk->sk_family != AF_INET6)) {
pr_debug("socket fd=%d not UDP\n", fd);
sk = ERR_PTR(-EINVAL);
goto out_sock;
}
lock_sock(sock->sk);
if (sock->sk->sk_user_data) {
lock_sock(sk);
if (sk->sk_user_data) {
sk = ERR_PTR(-EBUSY);
goto out_rel_sock;
}
sk = sock->sk;
sock_hold(sk);
tuncfg.sk_user_data = gtp;


@ -452,9 +452,16 @@ static void slip_transmit(struct work_struct *work)
*/
static void slip_write_wakeup(struct tty_struct *tty)
{
struct slip *sl = tty->disc_data;
struct slip *sl;
rcu_read_lock();
sl = rcu_dereference(tty->disc_data);
if (!sl)
goto out;
schedule_work(&sl->tx_work);
out:
rcu_read_unlock();
}
static void sl_tx_timeout(struct net_device *dev, unsigned int txqueue)
@ -882,10 +889,11 @@ static void slip_close(struct tty_struct *tty)
return;
spin_lock_bh(&sl->lock);
tty->disc_data = NULL;
rcu_assign_pointer(tty->disc_data, NULL);
sl->tty = NULL;
spin_unlock_bh(&sl->lock);
synchronize_rcu();
flush_work(&sl->tx_work);
/* VSV = very important to remove timers */


@ -1936,6 +1936,10 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
if (ret != XDP_PASS) {
rcu_read_unlock();
local_bh_enable();
if (frags) {
tfile->napi.skb = NULL;
mutex_unlock(&tfile->napi_mutex);
}
return total_len;
}
}


@ -20,6 +20,7 @@
#include <linux/mdio.h>
#include <linux/phy.h>
#include <net/ip6_checksum.h>
#include <net/vxlan.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/irq.h>
@ -3660,6 +3661,19 @@ static void lan78xx_tx_timeout(struct net_device *net, unsigned int txqueue)
tasklet_schedule(&dev->bh);
}
static netdev_features_t lan78xx_features_check(struct sk_buff *skb,
struct net_device *netdev,
netdev_features_t features)
{
if (skb->len + TX_OVERHEAD > MAX_SINGLE_PACKET_SIZE)
features &= ~NETIF_F_GSO_MASK;
features = vlan_features_check(skb, features);
features = vxlan_features_check(skb, features);
return features;
}
static const struct net_device_ops lan78xx_netdev_ops = {
.ndo_open = lan78xx_open,
.ndo_stop = lan78xx_stop,
@ -3673,6 +3687,7 @@ static const struct net_device_ops lan78xx_netdev_ops = {
.ndo_set_features = lan78xx_set_features,
.ndo_vlan_rx_add_vid = lan78xx_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = lan78xx_vlan_rx_kill_vid,
.ndo_features_check = lan78xx_features_check,
};
static void lan78xx_stat_monitor(struct timer_list *t)


@ -31,7 +31,7 @@
#define NETNEXT_VERSION "11"
/* Information for net */
#define NET_VERSION "10"
#define NET_VERSION "11"
#define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION
#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
@ -68,6 +68,7 @@
#define PLA_LED_FEATURE 0xdd92
#define PLA_PHYAR 0xde00
#define PLA_BOOT_CTRL 0xe004
#define PLA_LWAKE_CTRL_REG 0xe007
#define PLA_GPHY_INTR_IMR 0xe022
#define PLA_EEE_CR 0xe040
#define PLA_EEEP_CR 0xe080
@ -95,6 +96,7 @@
#define PLA_TALLYCNT 0xe890
#define PLA_SFF_STS_7 0xe8de
#define PLA_PHYSTATUS 0xe908
#define PLA_CONFIG6 0xe90a /* CONFIG6 */
#define PLA_BP_BA 0xfc26
#define PLA_BP_0 0xfc28
#define PLA_BP_1 0xfc2a
@@ -107,6 +109,7 @@
 #define PLA_BP_EN		0xfc38
 
 #define USB_USB2PHY		0xb41e
+#define USB_SSPHYLINK1		0xb426
 #define USB_SSPHYLINK2		0xb428
 #define USB_U2P3_CTRL		0xb460
 #define USB_CSR_DUMMY1		0xb464
@@ -300,6 +303,9 @@
 #define LINK_ON_WAKE_EN		0x0010
 #define LINK_OFF_WAKE_EN	0x0008
 
+/* PLA_CONFIG6 */
+#define LANWAKE_CLR_EN		BIT(0)
+
 /* PLA_CONFIG5 */
 #define BWF_EN			0x0040
 #define MWF_EN			0x0020
@@ -312,6 +318,7 @@
 /* PLA_PHY_PWR */
 #define TX_10M_IDLE_EN		0x0080
 #define PFM_PWM_SWITCH		0x0040
+#define TEST_IO_OFF		BIT(4)
 
 /* PLA_MAC_PWR_CTRL */
 #define D3_CLK_GATED_EN		0x00004000
@@ -324,6 +331,7 @@
 #define MAC_CLK_SPDWN_EN	BIT(15)
 
 /* PLA_MAC_PWR_CTRL3 */
+#define PLA_MCU_SPDWN_EN	BIT(14)
 #define PKT_AVAIL_SPDWN_EN	0x0100
 #define SUSPEND_SPDWN_EN	0x0004
 #define U1U2_SPDWN_EN		0x0002
@@ -354,6 +362,9 @@
 /* PLA_BOOT_CTRL */
 #define AUTOLOAD_DONE		0x0002
 
+/* PLA_LWAKE_CTRL_REG */
+#define LANWAKE_PIN		BIT(7)
+
 /* PLA_SUSPEND_FLAG */
 #define LINK_CHG_EVENT		BIT(0)
@@ -365,13 +376,18 @@
 #define DEBUG_LTSSM		0x0082
 
 /* PLA_EXTRA_STATUS */
+#define CUR_LINK_OK		BIT(15)
 #define U3P3_CHECK_EN		BIT(7)	/* RTL_VER_05 only */
 #define LINK_CHANGE_FLAG	BIT(8)
+#define POLL_LINK_CHG		BIT(0)
 
 /* USB_USB2PHY */
 #define USB2PHY_SUSPEND		0x0001
 #define USB2PHY_L1		0x0002
 
+/* USB_SSPHYLINK1 */
+#define DELAY_PHY_PWR_CHG	BIT(1)
+
 /* USB_SSPHYLINK2 */
 #define pwd_dn_scale_mask	0x3ffe
 #define pwd_dn_scale(x)		((x) << 1)
@@ -2861,6 +2877,17 @@ static int rtl8153_enable(struct r8152 *tp)
 	r8153_set_rx_early_timeout(tp);
 	r8153_set_rx_early_size(tp);
 
+	if (tp->version == RTL_VER_09) {
+		u32 ocp_data;
+
+		ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK);
+		ocp_data &= ~FC_PATCH_TASK;
+		ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
+		usleep_range(1000, 2000);
+		ocp_data |= FC_PATCH_TASK;
+		ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
+	}
+
 	return rtl_enable(tp);
 }
@@ -3374,8 +3401,8 @@ static void rtl8153b_runtime_enable(struct r8152 *tp, bool enable)
 		r8153b_ups_en(tp, false);
 		r8153_queue_wake(tp, false);
 		rtl_runtime_suspend_enable(tp, false);
-		r8153_u2p3en(tp, true);
-		r8153b_u1u2en(tp, true);
+		if (tp->udev->speed != USB_SPEED_HIGH)
+			r8153b_u1u2en(tp, true);
 	}
 }
@@ -4673,7 +4700,6 @@ static void r8153b_hw_phy_cfg(struct r8152 *tp)
 
 	r8153_aldps_en(tp, true);
 	r8152b_enable_fc(tp);
-	r8153_u2p3en(tp, true);
 
 	set_bit(PHY_RESET, &tp->flags);
 }
@@ -4952,6 +4978,8 @@ static void rtl8152_down(struct r8152 *tp)
 
 static void rtl8153_up(struct r8152 *tp)
 {
+	u32 ocp_data;
+
 	if (test_bit(RTL8152_UNPLUG, &tp->flags))
 		return;
@@ -4959,6 +4987,19 @@ static void rtl8153_up(struct r8152 *tp)
 	r8153_u2p3en(tp, false);
 	r8153_aldps_en(tp, false);
 	r8153_first_init(tp);
+
+	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6);
+	ocp_data |= LANWAKE_CLR_EN;
+	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data);
+
+	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG);
+	ocp_data &= ~LANWAKE_PIN;
+	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data);
+
+	ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1);
+	ocp_data &= ~DELAY_PHY_PWR_CHG;
+	ocp_write_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1, ocp_data);
+
 	r8153_aldps_en(tp, true);
 
 	switch (tp->version) {
@@ -4977,11 +5018,17 @@ static void rtl8153_up(struct r8152 *tp)
 
 static void rtl8153_down(struct r8152 *tp)
 {
+	u32 ocp_data;
+
 	if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
 		rtl_drop_queued_tx(tp);
 		return;
 	}
 
+	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6);
+	ocp_data &= ~LANWAKE_CLR_EN;
+	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data);
+
 	r8153_u1u2en(tp, false);
 	r8153_u2p3en(tp, false);
 	r8153_power_cut_en(tp, false);
@@ -4992,6 +5039,8 @@ static void rtl8153_down(struct r8152 *tp)
 
 static void rtl8153b_up(struct r8152 *tp)
 {
+	u32 ocp_data;
+
 	if (test_bit(RTL8152_UNPLUG, &tp->flags))
 		return;
@@ -5002,18 +5051,29 @@ static void rtl8153b_up(struct r8152 *tp)
 	r8153_first_init(tp);
 	ocp_write_dword(tp, MCU_TYPE_USB, USB_RX_BUF_TH, RX_THR_B);
 
+	ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3);
+	ocp_data &= ~PLA_MCU_SPDWN_EN;
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data);
+
 	r8153_aldps_en(tp, true);
-	r8153_u2p3en(tp, true);
-	r8153b_u1u2en(tp, true);
+
+	if (tp->udev->speed != USB_SPEED_HIGH)
+		r8153b_u1u2en(tp, true);
 }
 
 static void rtl8153b_down(struct r8152 *tp)
 {
+	u32 ocp_data;
+
 	if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
 		rtl_drop_queued_tx(tp);
 		return;
 	}
 
+	ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3);
+	ocp_data |= PLA_MCU_SPDWN_EN;
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data);
+
 	r8153b_u1u2en(tp, false);
 	r8153_u2p3en(tp, false);
 	r8153b_power_cut_en(tp, false);
@@ -5385,6 +5445,16 @@ static void r8153_init(struct r8152 *tp)
 		else
 			ocp_data |= DYNAMIC_BURST;
 		ocp_write_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY1, ocp_data);
+
+		r8153_queue_wake(tp, false);
+
+		ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS);
+		if (rtl8152_get_speed(tp) & LINK_STATUS)
+			ocp_data |= CUR_LINK_OK;
+		else
+			ocp_data &= ~CUR_LINK_OK;
+		ocp_data |= POLL_LINK_CHG;
+		ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data);
 	}
 
 	ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY2);
@@ -5414,10 +5484,19 @@ static void r8153_init(struct r8152 *tp)
 	ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001);
 
 	r8153_power_cut_en(tp, false);
+	rtl_runtime_suspend_enable(tp, false);
 	r8153_u1u2en(tp, true);
 	r8153_mac_clk_spd(tp, false);
 	usb_enable_lpm(tp->udev);
 
+	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6);
+	ocp_data |= LANWAKE_CLR_EN;
+	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data);
+
+	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG);
+	ocp_data &= ~LANWAKE_PIN;
+	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data);
+
 	/* rx aggregation */
 	ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_USB_CTRL);
 	ocp_data &= ~(RX_AGG_DISABLE | RX_ZERO_EN);
@@ -5482,7 +5561,17 @@ static void r8153b_init(struct r8152 *tp)
 	r8153b_ups_en(tp, false);
 	r8153_queue_wake(tp, false);
 	rtl_runtime_suspend_enable(tp, false);
-	r8153b_u1u2en(tp, true);
+
+	ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS);
+	if (rtl8152_get_speed(tp) & LINK_STATUS)
+		ocp_data |= CUR_LINK_OK;
+	else
+		ocp_data &= ~CUR_LINK_OK;
+	ocp_data |= POLL_LINK_CHG;
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data);
+
+	if (tp->udev->speed != USB_SPEED_HIGH)
+		r8153b_u1u2en(tp, true);
 	usb_enable_lpm(tp->udev);
 
 	/* MAC clock speed down */
@@ -5490,6 +5579,19 @@ static void r8153b_init(struct r8152 *tp)
 	ocp_data |= MAC_CLK_SPDWN_EN;
 	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, ocp_data);
 
+	ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3);
+	ocp_data &= ~PLA_MCU_SPDWN_EN;
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data);
+
+	if (tp->version == RTL_VER_09) {
+		/* Disable Test IO for 32QFN */
+		if (ocp_read_byte(tp, MCU_TYPE_PLA, 0xdc00) & BIT(5)) {
+			ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR);
+			ocp_data |= TEST_IO_OFF;
+			ocp_write_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR, ocp_data);
+		}
+	}
+
 	set_bit(GREEN_ETHERNET, &tp->flags);
 
 	/* rx aggregation */
@@ -6705,6 +6807,11 @@ static int rtl8152_probe(struct usb_interface *intf,
 
 	intf->needs_remote_wakeup = 1;
 
+	if (!rtl_can_wakeup(tp))
+		__rtl_set_wol(tp, 0);
+	else
+		tp->saved_wolopts = __rtl_get_wol(tp);
+
 	tp->rtl_ops.init(tp);
 #if IS_BUILTIN(CONFIG_USB_RTL8152)
 	/* Retry in case request_firmware() is not ready yet. */
@@ -6722,10 +6829,6 @@ static int rtl8152_probe(struct usb_interface *intf,
 		goto out1;
 	}
 
-	if (!rtl_can_wakeup(tp))
-		__rtl_set_wol(tp, 0);
-	tp->saved_wolopts = __rtl_get_wol(tp);
-
 	if (tp->saved_wolopts)
 		device_set_wakeup_enable(&udev->dev, true);
 	else
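
Almost every r8152 hunk above is the same idiom: read an OCP register into
ocp_data, set or clear a flag such as LANWAKE_CLR_EN or PLA_MCU_SPDWN_EN,
and write the value back. A self-contained sketch of that read-modify-write
pattern over a fake register file; all names here are illustrative, not the
driver's:

	#include <stdint.h>
	#include <stdio.h>

	static uint16_t fake_regs[16];	/* stand-in for device registers */

	static uint16_t reg_read(unsigned int idx)
	{
		return fake_regs[idx];
	}

	static void reg_write(unsigned int idx, uint16_t v)
	{
		fake_regs[idx] = v;
	}

	static void reg_set_bits(unsigned int idx, uint16_t bits)
	{
		reg_write(idx, reg_read(idx) | bits);	/* ocp_data |= FLAG  */
	}

	static void reg_clear_bits(unsigned int idx, uint16_t bits)
	{
		reg_write(idx, reg_read(idx) & ~bits);	/* ocp_data &= ~FLAG */
	}

	int main(void)
	{
		reg_set_bits(1, 1u << 0);	/* enable,  cf. LANWAKE_CLR_EN */
		reg_clear_bits(1, 1u << 7);	/* disable, cf. LANWAKE_PIN    */
		printf("reg1=0x%04x\n", reg_read(1));
		return 0;
	}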

diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c

@@ -7790,16 +7790,8 @@ static int readrids(struct net_device *dev, aironet_ioctl *comp) {
 	case AIROGVLIST:    ridcode = RID_APLIST;       break;
 	case AIROGDRVNAM:   ridcode = RID_DRVNAME;      break;
 	case AIROGEHTENC:   ridcode = RID_ETHERENCAP;   break;
-	case AIROGWEPKTMP:  ridcode = RID_WEP_TEMP;
-		/* Only super-user can read WEP keys */
-		if (!capable(CAP_NET_ADMIN))
-			return -EPERM;
-		break;
-	case AIROGWEPKNV:   ridcode = RID_WEP_PERM;
-		/* Only super-user can read WEP keys */
-		if (!capable(CAP_NET_ADMIN))
-			return -EPERM;
-		break;
+	case AIROGWEPKTMP:  ridcode = RID_WEP_TEMP;     break;
+	case AIROGWEPKNV:   ridcode = RID_WEP_PERM;     break;
 	case AIROGSTAT:     ridcode = RID_STATUS;       break;
 	case AIROGSTATSD32: ridcode = RID_STATSDELTA;   break;
 	case AIROGSTATSC32: ridcode = RID_STATS;        break;
@@ -7813,7 +7805,13 @@ static int readrids(struct net_device *dev, aironet_ioctl *comp) {
 		return -EINVAL;
 	}
 
-	if ((iobuf = kmalloc(RIDSIZE, GFP_KERNEL)) == NULL)
+	if (ridcode == RID_WEP_TEMP || ridcode == RID_WEP_PERM) {
+		/* Only super-user can read WEP keys */
+		if (!capable(CAP_NET_ADMIN))
+			return -EPERM;
+	}
+
+	if ((iobuf = kzalloc(RIDSIZE, GFP_KERNEL)) == NULL)
 		return -ENOMEM;
 
 	PC4500_readrid(ai,ridcode,iobuf,RIDSIZE, 1);
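
The airo change above does two things: it hoists the CAP_NET_ADMIN check
for the WEP-key RIDs ahead of the buffer allocation so a single test covers
both cases, and it swaps kmalloc() for kzalloc() so a RID that fills only
part of the buffer cannot leak stale heap bytes back to userspace. A
stand-alone sketch of both ideas, with calloc() standing in for kzalloc()
and a stubbed privilege check; the RID values and names are illustrative:

	#include <errno.h>
	#include <stdlib.h>

	#define RID_WEP_TEMP_SKETCH	0xff15
	#define RID_WEP_PERM_SKETCH	0xff16
	#define RIDSIZE_SKETCH		2048

	/* Stand-in for capable(CAP_NET_ADMIN). */
	static int privileged(void)
	{
		return 0;
	}

	static int read_rid_sketch(unsigned int ridcode, unsigned char **out)
	{
		unsigned char *iobuf;

		/* Permission check first, before any allocation. */
		if (ridcode == RID_WEP_TEMP_SKETCH ||
		    ridcode == RID_WEP_PERM_SKETCH) {
			if (!privileged())
				return -EPERM;
		}

		/* Zeroing allocator: no stale bytes if the device
		 * writes less than RIDSIZE_SKETCH.
		 */
		iobuf = calloc(1, RIDSIZE_SKETCH);
		if (!iobuf)
			return -ENOMEM;

		/* ... the device would fill some prefix of iobuf here ... */
		*out = iobuf;
		return 0;
	}

	int main(void)
	{
		unsigned char *buf;

		return read_rid_sketch(RID_WEP_TEMP_SKETCH, &buf) == -EPERM ? 0 : 1;
	}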

diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/tx.c b/drivers/net/wireless/intel/iwlwifi/dvm/tx.c

@@ -267,7 +267,7 @@ int iwlagn_tx_skb(struct iwl_priv *priv,
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 	struct iwl_station_priv *sta_priv = NULL;
 	struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS];
-	struct iwl_device_cmd *dev_cmd;
+	struct iwl_device_tx_cmd *dev_cmd;
 	struct iwl_tx_cmd *tx_cmd;
 	__le16 fc;
 	u8 hdr_len;
@@ -348,7 +348,6 @@ int iwlagn_tx_skb(struct iwl_priv *priv,
 	if (unlikely(!dev_cmd))
 		goto drop_unlock_priv;
 
-	memset(dev_cmd, 0, sizeof(*dev_cmd));
 	dev_cmd->hdr.cmd = REPLY_TX;
 	tx_cmd = (struct iwl_tx_cmd *) dev_cmd->payload;

diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c

@@ -357,8 +357,8 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
 {
 	union acpi_object *wifi_pkg, *data;
 	bool enabled;
-	int i, n_profiles, tbl_rev;
-	int ret = 0;
+	int i, n_profiles, tbl_rev, pos;
+	int ret = 0;
 
 	data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD);
 	if (IS_ERR(data))
@@ -390,10 +390,10 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
 		goto out_free;
 	}
 
-	for (i = 0; i < n_profiles; i++) {
-		/* the tables start at element 3 */
-		int pos = 3;
+	/* the tables start at element 3 */
+	pos = 3;
 
+	for (i = 0; i < n_profiles; i++) {
 		/* The EWRD profiles officially go from 2 to 4, but we
 		 * save them in sar_profiles[1-3] (because we don't
 		 * have profile 0). So in the array we start from 1.

diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c

@@ -2669,12 +2669,7 @@ int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt,
 {
 	int ret = 0;
 
-	/* if the FW crashed or not debug monitor cfg was given, there is
-	 * no point in changing the recording state
-	 */
-	if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status) ||
-	    (!fwrt->trans->dbg.dest_tlv &&
-	     fwrt->trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID))
+	if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status))
 		return 0;
 
 	if (fw_has_capa(&fwrt->fw->ucode_capa,

Some files were not shown because too many files have changed in this diff.