Merge tag 'drm-misc-next-2020-07-22' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.9:

UAPI Changes:

Cross-subsystem Changes:
- Convert panel-dsi-cm and ingenic bindings to YAML.
- Add lockdep annotations for dma-fence. \o/
- Describe why indefinite fences are a bad idea.
- Update binding for rocktech jh057n00900.

Core Changes:
- Add vblank workers.
- Use spin_(un)lock_irq instead of the irqsave/restore variants in crtc code.
- Add managed vram helpers.
- Convert more logging to drm functions.
- Replace more http links with https in core and drivers.
- Clean up ttm iomem functions and implementation.
- Remove TTM CMA memtype as it doesn't work correctly.
- Remove TTM_MEMTYPE_FLAG_MAPPABLE for many drivers that have no
  unmappable memory resources.

Driver Changes:
- Add CRC support to nouveau, using the new vblank workers.
- Dithering and atomic state fix for nouveau.
- Fixes for Frida FRD350H54004 panel.
- Add support for OSD mode (sprite planes), IPU (scaling) and multiple
  panels/bridges to ingenic.
- Use managed vram helpers in ast.
- Assorted small fixes to ingenic, i810, mxsfb.
- Remove optional unused ttm dummy functions.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/d6bf269e-ccb2-8a7b-fdae-226e9e3f8274@linux.intel.com
Commit 4145cb5416 by Dave Airlie, 2020-07-23 14:01:37 +10:00
102 changed files with 4398 additions and 783 deletions


@ -165,6 +165,7 @@ examples:
- |
#include <dt-bindings/clock/imx8mq-clock.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/imx8mq-reset.h>
@ -191,12 +192,12 @@ examples:
phy-names = "dphy";
panel@0 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "rocktech,jh057n00900";
reg = <0>;
port@0 {
reg = <0>;
vcc-supply = <&reg_2v8_p>;
iovcc-supply = <&reg_1v8_p>;
reset-gpios = <&gpio3 13 GPIO_ACTIVE_LOW>;
port {
panel_in: endpoint {
remote-endpoint = <&mipi_dsi_out>;
};


@ -0,0 +1,65 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/ingenic,ipu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Ingenic SoCs Image Processing Unit (IPU) devicetree bindings
maintainers:
- Paul Cercueil <paul@crapouillou.net>
properties:
compatible:
oneOf:
- enum:
- ingenic,jz4725b-ipu
- ingenic,jz4760-ipu
- items:
- const: ingenic,jz4770-ipu
- const: ingenic,jz4760-ipu
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
maxItems: 1
clock-names:
const: ipu
patternProperties:
"^ports?$":
description: OF graph bindings (specified in bindings/graph.txt).
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/jz4770-cgu.h>
ipu@13080000 {
compatible = "ingenic,jz4770-ipu", "ingenic,jz4760-ipu";
reg = <0x13080000 0x800>;
interrupt-parent = <&intc>;
interrupts = <29>;
clocks = <&cgu JZ4770_CLK_IPU>;
clock-names = "ipu";
port {
ipu_ep: endpoint {
remote-endpoint = <&lcdc_ep>;
};
};
};


@ -1,45 +0,0 @@
Ingenic JZ47xx LCD driver
Required properties:
- compatible: one of:
* ingenic,jz4740-lcd
* ingenic,jz4725b-lcd
* ingenic,jz4770-lcd
- reg: LCD registers location and length
- clocks: LCD pixclock and device clock specifiers.
The device clock is only required on the JZ4740.
- clock-names: "lcd_pclk" and "lcd"
- interrupts: Specifies the interrupt line the LCD controller is connected to.
Example:
panel {
compatible = "sharp,ls020b1dd01d";
backlight = <&backlight>;
power-supply = <&vcc>;
port {
panel_input: endpoint {
remote-endpoint = <&panel_output>;
};
};
};
lcd: lcd-controller@13050000 {
compatible = "ingenic,jz4725b-lcd";
reg = <0x13050000 0x1000>;
interrupt-parent = <&intc>;
interrupts = <31>;
clocks = <&cgu JZ4725B_CLK_LCD>;
clock-names = "lcd";
port {
panel_output: endpoint {
remote-endpoint = <&panel_input>;
};
};
};


@ -0,0 +1,126 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/ingenic,lcd.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Ingenic SoCs LCD controller devicetree bindings
maintainers:
- Paul Cercueil <paul@crapouillou.net>
properties:
$nodename:
pattern: "^lcd-controller@[0-9a-f]+$"
compatible:
enum:
- ingenic,jz4740-lcd
- ingenic,jz4725b-lcd
- ingenic,jz4770-lcd
- ingenic,jz4780-lcd
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
items:
- description: Pixel clock
- description: Module clock
minItems: 1
clock-names:
items:
- const: lcd_pclk
- const: lcd
minItems: 1
port:
description: OF graph bindings (specified in bindings/graph.txt).
ports:
description: OF graph bindings (specified in bindings/graph.txt).
type: object
properties:
port@0:
type: object
description: DPI output, to interface with TFT panels.
port@8:
type: object
description: Link to the Image Processing Unit (IPU).
(See ingenic,ipu.yaml).
required:
- port@0
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
if:
properties:
compatible:
contains:
enum:
- ingenic,jz4740-lcd
- ingenic,jz4780-lcd
then:
properties:
clocks:
minItems: 2
clock-names:
minItems: 2
else:
properties:
clocks:
maxItems: 1
clock-names:
maxItems: 1
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/jz4740-cgu.h>
lcd-controller@13050000 {
compatible = "ingenic,jz4740-lcd";
reg = <0x13050000 0x1000>;
interrupt-parent = <&intc>;
interrupts = <30>;
clocks = <&cgu JZ4740_CLK_LCD_PCLK>, <&cgu JZ4740_CLK_LCD>;
clock-names = "lcd_pclk", "lcd";
port {
endpoint {
remote-endpoint = <&panel_input>;
};
};
};
- |
#include <dt-bindings/clock/jz4725b-cgu.h>
lcd-controller@13050000 {
compatible = "ingenic,jz4725b-lcd";
reg = <0x13050000 0x1000>;
interrupt-parent = <&intc>;
interrupts = <31>;
clocks = <&cgu JZ4725B_CLK_LCD>;
clock-names = "lcd_pclk";
port {
endpoint {
remote-endpoint = <&panel_input>;
};
};
};


@ -1,29 +0,0 @@
Generic MIPI DSI Command Mode Panel
===================================
Required properties:
- compatible: "panel-dsi-cm"
Optional properties:
- label: a symbolic name for the panel
- reset-gpios: panel reset gpio
- te-gpios: panel TE gpio
Required nodes:
- Video port for DSI input
Example
-------
lcd0: display {
compatible = "tpo,taal", "panel-dsi-cm";
label = "lcd0";
reset-gpios = <&gpio4 6 GPIO_ACTIVE_HIGH>;
port {
lcd0_in: endpoint {
remote-endpoint = <&dsi1_out_ep>;
};
};
};


@ -0,0 +1,86 @@
# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-dsi-cm.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DSI command mode panels
maintainers:
- Tomi Valkeinen <tomi.valkeinen@ti.com>
- Sebastian Reichel <sre@kernel.org>
description: |
This binding file is a collection of the DSI panels that
are usually driven in command mode. If no backlight is
referenced via the optional backlight property, the DSI
panel is assumed to have native backlight support.
The panel may use an OF graph binding for the association
to the display, or it may be a direct child node of the
display.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
items:
- enum:
- motorola,droid4-panel # Panel from Motorola Droid4 phone
- nokia,himalaya # Panel from Nokia N950 phone
- tpo,taal # Panel from OMAP4 SDP board
- const: panel-dsi-cm # Generic DSI command mode panel compatible fallback
reg:
maxItems: 1
description: DSI virtual channel
vddi-supply:
description:
Display panels require power to be supplied. While several panels need
more than one power supply with panel-specific constraints governing the
order and timings of the power supplies, in many cases a single power
supply is sufficient, either because the panel has a single power rail, or
because all its power rails can be driven by the same supply. In that case
the vddi-supply property specifies the supply powering the panel as a
phandle to a regulator.
vpnl-supply:
description:
When the display panel needs a second power supply, this property can be
used in addition to vddi-supply. Both supplies will be enabled at the
same time before the panel is accessed.
width-mm: true
height-mm: true
label: true
rotation: true
panel-timing: true
port: true
reset-gpios: true
te-gpios: true
backlight: true
additionalProperties: false
required:
- compatible
- reg
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi-controller {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "tpo,taal", "panel-dsi-cm";
reg = <0>;
reset-gpios = <&gpio4 6 GPIO_ACTIVE_HIGH>;
};
};
...


@ -24,6 +24,7 @@ properties:
# Xingbangda XBD599 5.99" 720x1440 TFT LCD panel
- xingbangda,xbd599
port: true
reg:
maxItems: 1
description: DSI virtual channel


@ -133,6 +133,18 @@ DMA Fences
.. kernel-doc:: drivers/dma-buf/dma-fence.c
:doc: DMA fences overview
DMA Fence Cross-Driver Contract
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-fence.c
:doc: fence cross-driver contract
DMA Fence Signalling Annotations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-fence.c
:doc: fence signalling annotation
DMA Fences Functions Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -166,3 +178,73 @@ DMA Fence uABI/Sync File
.. kernel-doc:: include/linux/sync_file.h
:internal:
Indefinite DMA Fences
~~~~~~~~~~~~~~~~~~~~~
At various times, &dma_fence implementations with an indefinite time until
dma_fence_wait() finishes have been proposed. Examples include:
* Future fences, used in HWC1 to signal when a buffer isn't used by the display
any longer, and created with the screen update that makes the buffer visible.
The time this fence completes is entirely under userspace's control.
* Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
been set. Used to asynchronously delay command submission.
* Userspace fences or gpu futexes, fine-grained locking within a command buffer
that userspace uses for synchronization across engines or with the CPU, which
are then imported as a DMA fence for integration into existing winsys
protocols.
* Long-running compute command buffers, while still using traditional end of
batch DMA fences for memory management instead of context preemption DMA
fences which get reattached when the compute job is rescheduled.
Common to all these schemes is that userspace controls the dependencies of these
fences and controls when they fire. Mixing indefinite fences with normal
in-kernel DMA fences does not work, even when a fallback timeout is included to
protect against malicious userspace:
* Only the kernel knows about all DMA fence dependencies; userspace is not aware
of dependencies injected due to memory management or scheduler decisions.
* Only userspace knows about all dependencies in indefinite fences and when
exactly they will complete; the kernel has no visibility.
Furthermore the kernel has to be able to hold up userspace command submission
for memory management needs, which means we must support indefinite fences being
dependent upon DMA fences. If the kernel also supports indefinite fences in the
kernel, as any of the above proposals would require, there is the potential for
deadlocks.
.. kernel-render:: DOT
:alt: Indefinite Fencing Dependency Cycle
:caption: Indefinite Fencing Dependency Cycle
digraph "Fencing Cycle" {
node [shape=box bgcolor=grey style=filled]
kernel [label="Kernel DMA Fences"]
userspace [label="userspace controlled fences"]
kernel -> userspace [label="memory management"]
userspace -> kernel [label="Future fence, fence proxy, ..."]
{ rank=same; kernel userspace }
}
This means that the kernel might accidentally create deadlocks through memory
management dependencies which userspace is unaware of, randomly hanging
workloads until the timeout kicks in, even though the workloads, from
userspace's perspective, do not contain a deadlock. In such a mixed fencing
architecture there is no single entity with knowledge of all dependencies.
Therefore preventing such deadlocks from within the kernel is not possible.
The only solution to avoid dependency loops is to not allow indefinite
fences in the kernel. This means:
* No future fences, proxy fences or userspace fences imported as DMA fences,
with or without a timeout.
* No DMA fences that signal end of batchbuffer for command submission where
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.


@ -127,7 +127,7 @@ At least on the EP9315 there is a silicon bug which causes bit 27 of
the VIDSCRNPAGE (framebuffer physical offset) to be tied low. There is
an unofficial errata for this bug at::
http://marc.info/?l=linux-arm-kernel&m=110061245502000&w=2
https://marc.info/?l=linux-arm-kernel&m=110061245502000&w=2
By default the EP93xx framebuffer driver checks if the allocated physical
address has bit 27 set. If it does, then the memory is freed and an


@ -543,3 +543,18 @@ Vertical Blanking and Interrupt Handling Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_vblank.c
:export:
Vertical Blank Work
===================
.. kernel-doc:: drivers/gpu/drm/drm_vblank_work.c
:doc: vblank works
Vertical Blank Work Functions Reference
---------------------------------------
.. kernel-doc:: include/drm/drm_vblank_work.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_vblank_work.c
:export:
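A minimal sketch of a self-rearming work item built on this API, assuming the
helpers added in this pull; the foo_* names and the hardware-programming step
are hypothetical, not from the patch::

    #include <drm/drm_crtc.h>
    #include <drm/drm_vblank.h>
    #include <drm/drm_vblank_work.h>

    struct foo_crtc {
            struct drm_crtc base;
            struct drm_vblank_work work;    /* hypothetical driver state */
    };

    static void foo_program_hw(struct foo_crtc *foo);      /* hypothetical */

    static void foo_work_fn(struct kthread_work *kwork)
    {
            struct drm_vblank_work *work =
                    container_of(kwork, struct drm_vblank_work, base);
            struct foo_crtc *foo = container_of(work, struct foo_crtc, work);

            /* Runs on the CRTC's realtime kthread worker once the target
             * vblank count has passed.
             */
            foo_program_hw(foo);

            /* Self re-arm for the next vblank; drm_vblank_work_schedule()
             * takes its own vblank reference internally.
             */
            drm_vblank_work_schedule(work,
                                     drm_crtc_vblank_count(&foo->base) + 1,
                                     true);
    }

    /* One-time setup, e.g. during CRTC initialization. */
    static void foo_init_work(struct foo_crtc *foo)
    {
            drm_vblank_work_init(&foo->work, &foo->base, foo_work_fn);
    }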


@ -185,7 +185,7 @@ enhancing the kernel code to adapt as a kernel module and also did the
implementation of the user space side [3]. Now (2009) Tiago Vignatti and Dave
Airlie finally put this work in shape and queued to Jesse Barnes' PCI tree.
0) http://cgit.freedesktop.org/xorg/xserver/commit/?id=4b42448a2388d40f257774fbffdccaea87bd0347
1) http://lists.freedesktop.org/archives/xorg/2005-March/006663.html
2) http://lists.freedesktop.org/archives/xorg/2005-March/006745.html
3) http://lists.freedesktop.org/archives/xorg/2007-October/029507.html
0) https://cgit.freedesktop.org/xorg/xserver/commit/?id=4b42448a2388d40f257774fbffdccaea87bd0347
1) https://lists.freedesktop.org/archives/xorg/2005-March/006663.html
2) https://lists.freedesktop.org/archives/xorg/2005-March/006745.html
3) https://lists.freedesktop.org/archives/xorg/2007-October/029507.html


@ -64,6 +64,52 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(1);
* &dma_buf.resv pointer.
*/
/**
* DOC: fence cross-driver contract
*
* Since &dma_fence provide a cross driver contract, all drivers must follow the
* same rules:
*
* * Fences must complete in a reasonable time. Fences which represent kernels
* and shaders submitted by userspace, which could run forever, must be backed
* up by timeout and gpu hang recovery code. Minimally that code must prevent
* further command submission and force complete all in-flight fences, e.g.
* when the driver or hardware do not support gpu reset, or if the gpu reset
* failed for some reason. Ideally the driver supports gpu recovery which only
* affects the offending userspace context, and no other userspace
* submissions.
*
* * Drivers may have different ideas of what completion within a reasonable
* time means. Some hang recovery code uses a fixed timeout, others a mix
* between observing forward progress and increasingly strict timeouts.
* Drivers should not try to second guess timeout handling of fences from
* other drivers.
*
* * To ensure there's no deadlocks of dma_fence_wait() against other locks
* drivers should annotate all code required to reach dma_fence_signal(),
* which completes the fences, with dma_fence_begin_signalling() and
* dma_fence_end_signalling().
*
* * Drivers are allowed to call dma_fence_wait() while holding dma_resv_lock().
* This means any code required for fence completion cannot acquire a
* &dma_resv lock. Note that this also pulls in the entire established
* locking hierarchy around dma_resv_lock() and dma_resv_unlock().
*
* * Drivers are allowed to call dma_fence_wait() from their &shrinker
* callbacks. This means any code required for fence completion cannot
* allocate memory with GFP_KERNEL.
*
* * Drivers are allowed to call dma_fence_wait() from their &mmu_notifier
* respectively &mmu_interval_notifier callbacks. This means any code required
* for fence completion cannot allocate memory with GFP_NOFS or GFP_NOIO.
* Only GFP_ATOMIC is permissible, which might fail.
*
* Note that only GPU drivers have a reasonable excuse for both requiring
* &mmu_interval_notifier and &shrinker callbacks at the same time as having to
* track asynchronous compute work using &dma_fence. No driver outside of
* drivers/gpu should ever call dma_fence_wait() in such contexts.
*/
static const char *dma_fence_stub_get_name(struct dma_fence *fence)
{
return "stub";
@ -110,6 +156,160 @@ u64 dma_fence_context_alloc(unsigned num)
}
EXPORT_SYMBOL(dma_fence_context_alloc);
/**
* DOC: fence signalling annotation
*
* Proving correctness of all the kernel code around &dma_fence through code
* review and testing is tricky for a few reasons:
*
* * It is a cross-driver contract, and therefore all drivers must follow the
* same rules for lock nesting order, calling contexts for various functions
* and anything else significant for in-kernel interfaces. But it is also
* impossible to test all drivers in a single machine, hence brute-force N vs.
* N testing of all combinations is impossible. Even just limiting to the
* possible combinations is infeasible.
*
* * There is an enormous amount of driver code involved. For render drivers
* there's the tail of command submission, after fences are published,
* scheduler code, interrupt and workers to process job completion,
* and timeout, gpu reset and gpu hang recovery code. Plus for integration
* with core mm we have &mmu_notifier, respectively &mmu_interval_notifier,
* and &shrinker. For modesetting drivers there's the commit tail functions
* between when fences for an atomic modeset are published, and when the
* corresponding vblank completes, including any interrupt processing and
* related workers. Auditing all that code, across all drivers, is not
* feasible.
*
* * Due to how many other subsystems are involved and the locking hierarchies
* this pulls in there is extremely thin wiggle-room for driver-specific
* differences. &dma_fence interacts with almost all of the core memory
* handling through page fault handlers via &dma_resv, dma_resv_lock() and
* dma_resv_unlock(). On the other side it also interacts through all
* allocation sites through &mmu_notifier and &shrinker.
*
* Furthermore lockdep does not handle cross-release dependencies, which means
* any deadlocks between dma_fence_wait() and dma_fence_signal() can't be caught
* at runtime with some quick testing. The simplest example is one thread
* waiting on a &dma_fence while holding a lock::
*
* lock(A);
* dma_fence_wait(B);
* unlock(A);
*
* while the other thread is stuck trying to acquire the same lock, which
* prevents it from signalling the fence the previous thread is stuck waiting
* on::
*
* lock(A);
* unlock(A);
* dma_fence_signal(B);
*
* By manually annotating all code relevant to signalling a &dma_fence we can
* teach lockdep about these dependencies, which also helps with the validation
* headache since now lockdep can check all the rules for us::
*
* cookie = dma_fence_begin_signalling();
* lock(A);
* unlock(A);
* dma_fence_signal(B);
* dma_fence_end_signalling(cookie);
*
* For using dma_fence_begin_signalling() and dma_fence_end_signalling() to
* annotate critical sections the following rules need to be observed:
*
* * All code necessary to complete a &dma_fence must be annotated, from the
* point where a fence is accessible to other threads, to the point where
* dma_fence_signal() is called. Un-annotated code can contain deadlock issues,
* and due to the very strict rules and many corner cases it is infeasible to
* catch these just with review or normal stress testing.
*
* * &struct dma_resv deserves a special note, since the readers are only
* protected by rcu. This means the signalling critical section starts as soon
* as the new fences are installed, even before dma_resv_unlock() is called.
*
* * The only exception are fast paths and opportunistic signalling code, which
* calls dma_fence_signal() purely as an optimization, but is not required to
* guarantee completion of a &dma_fence. The usual example is a wait IOCTL
* which calls dma_fence_signal(), while the mandatory completion path goes
* through a hardware interrupt and possible job completion worker.
*
* * To aid composability of code, the annotations can be freely nested, as long
* as the overall locking hierarchy is consistent. The annotations also work
* both in interrupt and process context. Due to implementation details this
* requires that callers pass an opaque cookie from
* dma_fence_begin_signalling() to dma_fence_end_signalling().
*
* * Validation against the cross driver contract is implemented by priming
* lockdep with the relevant hierarchy at boot-up. This means even just
* testing with a single device is enough to validate a driver, at least as
* far as deadlocks with dma_fence_wait() against dma_fence_signal() are
* concerned.
*/
#ifdef CONFIG_LOCKDEP
struct lockdep_map dma_fence_lockdep_map = {
.name = "dma_fence_map"
};
/**
* dma_fence_begin_signalling - begin a critical DMA fence signalling section
*
* Drivers should use this to annotate the beginning of any code section
* required to eventually complete &dma_fence by calling dma_fence_signal().
*
* The end of these critical sections are annotated with
* dma_fence_end_signalling().
*
* Returns:
*
* Opaque cookie needed by the implementation, which needs to be passed to
* dma_fence_end_signalling().
*/
bool dma_fence_begin_signalling(void)
{
/* explicitly nesting ... */
if (lock_is_held_type(&dma_fence_lockdep_map, 1))
return true;
/* rely on might_sleep check for soft/hardirq locks */
if (in_atomic())
return true;
/* ... and non-recursive readlock */
lock_acquire(&dma_fence_lockdep_map, 0, 0, 1, 1, NULL, _RET_IP_);
return false;
}
EXPORT_SYMBOL(dma_fence_begin_signalling);
/**
* dma_fence_end_signalling - end a critical DMA fence signalling section
*
* Closes a critical section annotation opened by dma_fence_begin_signalling().
*/
void dma_fence_end_signalling(bool cookie)
{
if (cookie)
return;
lock_release(&dma_fence_lockdep_map, _RET_IP_);
}
EXPORT_SYMBOL(dma_fence_end_signalling);
void __dma_fence_might_wait(void)
{
bool tmp;
tmp = lock_is_held_type(&dma_fence_lockdep_map, 1);
if (tmp)
lock_release(&dma_fence_lockdep_map, _THIS_IP_);
lock_map_acquire(&dma_fence_lockdep_map);
lock_map_release(&dma_fence_lockdep_map);
if (tmp)
lock_acquire(&dma_fence_lockdep_map, 0, 0, 1, 1, NULL, _THIS_IP_);
}
#endif
/**
* dma_fence_signal_locked - signal completion of a fence
* @fence: the fence to signal
@ -170,14 +370,19 @@ int dma_fence_signal(struct dma_fence *fence)
{
unsigned long flags;
int ret;
bool tmp;
if (!fence)
return -EINVAL;
tmp = dma_fence_begin_signalling();
spin_lock_irqsave(fence->lock, flags);
ret = dma_fence_signal_locked(fence);
spin_unlock_irqrestore(fence->lock, flags);
dma_fence_end_signalling(tmp);
return ret;
}
EXPORT_SYMBOL(dma_fence_signal);
@ -210,6 +415,8 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
might_sleep();
__dma_fence_might_wait();
trace_dma_fence_wait_start(fence);
if (fence->ops->wait)
ret = fence->ops->wait(fence, intr, timeout);
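As a rough sketch of how a driver could use the new annotations (assumed
usage only; the foo_* names are invented, not from this patch)::

    #include <linux/dma-fence.h>

    struct foo_job {
            struct dma_fence *done_fence;   /* hypothetical job state */
    };

    /* Called from the IRQ handler or a completion worker. Everything on
     * the path to dma_fence_signal() is part of the signalling critical
     * section, so lockdep can check that it never takes a &dma_resv lock
     * and never allocates with GFP_KERNEL.
     */
    static void foo_complete_job(struct foo_job *job)
    {
            bool cookie;

            cookie = dma_fence_begin_signalling();
            dma_fence_signal(job->done_fence);      /* annotations nest fine */
            dma_fence_end_signalling(cookie);
    }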


@ -36,6 +36,7 @@
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/sched/mm.h>
#include <linux/mmu_notifier.h>
/**
* DOC: Reservation Object Overview
@ -116,6 +117,13 @@ static int __init dma_resv_lockdep(void)
if (ret == -EDEADLK)
dma_resv_lock_slow(&obj, &ctx);
fs_reclaim_acquire(GFP_KERNEL);
#ifdef CONFIG_MMU_NOTIFIER
lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
__dma_fence_might_wait();
lock_map_release(&__mmu_notifier_invalidate_range_start_map);
#else
__dma_fence_might_wait();
#endif
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
ww_acquire_fini(&ctx);


@ -18,7 +18,7 @@ drm-y := drm_auth.o drm_cache.o \
drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \
drm_syncobj.o drm_lease.o drm_writeback.o drm_client.o \
drm_client_modeset.o drm_atomic_uapi.o drm_hdcp.o \
drm_managed.o
drm_managed.o drm_vblank_work.o
drm-$(CONFIG_DRM_LEGACY) += drm_legacy_misc.o drm_bufs.o drm_context.o drm_dma.o drm_scatter.o drm_lock.o
drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o


@ -94,7 +94,7 @@ static int amdgpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
man->func = &amdgpu_gtt_mgr_func;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE | TTM_MEMTYPE_FLAG_CMA;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
break;
case TTM_PL_VRAM:
/* "On-card" video ram */
@ -109,7 +109,7 @@ static int amdgpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
case AMDGPU_PL_OA:
/* On-chip GDS memory*/
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_CMA;
man->flags = TTM_MEMTYPE_FLAG_FIXED;
man->available_caching = TTM_PL_FLAG_UNCACHED;
man->default_caching = TTM_PL_FLAG_UNCACHED;
break;
@ -837,10 +837,6 @@ static int amdgpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_
return 0;
}
static void amdgpu_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
}
static unsigned long amdgpu_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
unsigned long page_offset)
{
@ -1755,7 +1751,6 @@ static struct ttm_bo_driver amdgpu_bo_driver = {
.release_notify = &amdgpu_bo_release_notify,
.fault_reserve_notify = &amdgpu_bo_fault_reserve_notify,
.io_mem_reserve = &amdgpu_ttm_io_mem_reserve,
.io_mem_free = &amdgpu_ttm_io_mem_free,
.io_mem_pfn = amdgpu_ttm_io_mem_pfn,
.access_memory = &amdgpu_ttm_access_memory,
.del_from_lru_notify = &amdgpu_vm_del_from_lru_notify


@ -3,7 +3,7 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
ast-y := ast_cursor.o ast_drv.o ast_main.o ast_mode.o ast_ttm.o ast_post.o \
ast-y := ast_cursor.o ast_drv.o ast_main.o ast_mm.o ast_mode.o ast_post.o \
ast_dp501.o
obj-$(CONFIG_DRM_AST) := ast.o


@ -110,7 +110,6 @@ struct ast_private {
uint32_t dram_bus_width;
uint32_t dram_type;
uint32_t mclk;
uint32_t vram_size;
int fb_mtrr;
@ -292,7 +291,6 @@ int ast_mode_config_init(struct ast_private *ast);
#define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);
/* ast post */
void ast_enable_vga(struct drm_device *dev);


@ -378,38 +378,6 @@ static int ast_get_dram_info(struct drm_device *dev)
return 0;
}
static u32 ast_get_vram_info(struct drm_device *dev)
{
struct ast_private *ast = to_ast_private(dev);
u8 jreg;
u32 vram_size;
ast_open_key(ast);
vram_size = AST_VIDMEM_DEFAULT_SIZE;
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xaa, 0xff);
switch (jreg & 3) {
case 0: vram_size = AST_VIDMEM_SIZE_8M; break;
case 1: vram_size = AST_VIDMEM_SIZE_16M; break;
case 2: vram_size = AST_VIDMEM_SIZE_32M; break;
case 3: vram_size = AST_VIDMEM_SIZE_64M; break;
}
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x99, 0xff);
switch (jreg & 0x03) {
case 1:
vram_size -= 0x100000;
break;
case 2:
vram_size -= 0x200000;
break;
case 3:
vram_size -= 0x400000;
break;
}
return vram_size;
}
int ast_driver_load(struct drm_device *dev, unsigned long flags)
{
struct ast_private *ast;
@ -450,16 +418,14 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
ast_detect_chip(dev, &need_post);
if (need_post)
ast_post_gpu(dev);
ret = ast_get_dram_info(dev);
if (ret)
goto out_free;
ast->vram_size = ast_get_vram_info(dev);
drm_info(dev, "dram MCLK=%u Mhz type=%d bus_width=%d size=%08x\n",
ast->mclk, ast->dram_type,
ast->dram_bus_width, ast->vram_size);
drm_info(dev, "dram MCLK=%u Mhz type=%d bus_width=%d\n",
ast->mclk, ast->dram_type, ast->dram_bus_width);
if (need_post)
ast_post_gpu(dev);
ret = ast_mm_init(ast);
if (ret)
@ -486,6 +452,5 @@ void ast_driver_unload(struct drm_device *dev)
ast_release_firmware(dev);
kfree(ast->dp501_fw_addr);
ast_mm_fini(ast);
kfree(ast);
}


@ -28,22 +28,72 @@
#include <linux/pci.h>
#include <drm/drm_print.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include "ast_drv.h"
static u32 ast_get_vram_size(struct ast_private *ast)
{
u8 jreg;
u32 vram_size;
ast_open_key(ast);
vram_size = AST_VIDMEM_DEFAULT_SIZE;
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xaa, 0xff);
switch (jreg & 3) {
case 0:
vram_size = AST_VIDMEM_SIZE_8M;
break;
case 1:
vram_size = AST_VIDMEM_SIZE_16M;
break;
case 2:
vram_size = AST_VIDMEM_SIZE_32M;
break;
case 3:
vram_size = AST_VIDMEM_SIZE_64M;
break;
}
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x99, 0xff);
switch (jreg & 0x03) {
case 1:
vram_size -= 0x100000;
break;
case 2:
vram_size -= 0x200000;
break;
case 3:
vram_size -= 0x400000;
break;
}
return vram_size;
}
static void ast_mm_release(struct drm_device *dev, void *ptr)
{
struct ast_private *ast = to_ast_private(dev);
arch_phys_wc_del(ast->fb_mtrr);
arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0));
}
int ast_mm_init(struct ast_private *ast)
{
struct drm_vram_mm *vmm;
u32 vram_size;
int ret;
struct drm_device *dev = ast->dev;
vmm = drm_vram_helper_alloc_mm(
dev, pci_resource_start(dev->pdev, 0),
ast->vram_size);
if (IS_ERR(vmm)) {
ret = PTR_ERR(vmm);
vram_size = ast_get_vram_size(ast);
ret = drmm_vram_helper_init(dev, pci_resource_start(dev->pdev, 0),
vram_size);
if (ret) {
drm_err(dev, "Error initializing VRAM MM; %d\n", ret);
return ret;
}
@ -53,16 +103,5 @@ int ast_mm_init(struct ast_private *ast)
ast->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0));
return 0;
}
void ast_mm_fini(struct ast_private *ast)
{
struct drm_device *dev = ast->dev;
drm_vram_helper_release_mm(dev);
arch_phys_wc_del(ast->fb_mtrr);
arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0));
return drmm_add_action_or_reset(dev, ast_mm_release, NULL);
}


@ -1844,9 +1844,7 @@ static void connector_bad_edid(struct drm_connector *connector,
if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
return;
dev_warn(connector->dev->dev,
"%s: EDID is invalid:\n",
connector->name);
drm_warn(connector->dev, "%s: EDID is invalid:\n", connector->name);
for (i = 0; i < num_blocks; i++) {
u8 *block = edid + i * EDID_LENGTH;
char prefix[20];
@ -5298,7 +5296,7 @@ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid)
}
if (!drm_edid_is_valid(edid)) {
clear_eld(connector);
dev_warn(connector->dev->dev, "%s: EDID invalid.\n",
drm_warn(connector->dev, "%s: EDID invalid.\n",
connector->name);
return 0;
}


@ -105,8 +105,8 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr,
GFP_KERNEL | __GFP_NOWARN);
if (!cma_obj->vaddr) {
dev_dbg(drm->dev, "failed to allocate buffer with size %zu\n",
size);
drm_dbg(drm, "failed to allocate buffer with size %zu\n",
size);
ret = -ENOMEM;
goto error;
}


@ -10,6 +10,7 @@
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_ttm_helper.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_mode.h>
#include <drm/drm_plane.h>
#include <drm/drm_prime.h>
@ -40,12 +41,11 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
* the frame's scanout buffer or the cursor image. If there's no more space
* left in VRAM, inactive GEM objects can be moved to system memory.
*
* The easiest way to use the VRAM helper library is to call
* drm_vram_helper_alloc_mm(). The function allocates and initializes an
* instance of &struct drm_vram_mm in &struct drm_device.vram_mm . Use
* &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and
* &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations;
* as illustrated below.
* To initialize the VRAM helper library call drmm_vram_helper_alloc_mm().
* The function allocates and initializes an instance of &struct drm_vram_mm
* in &struct drm_device.vram_mm . Use &DRM_GEM_VRAM_DRIVER to initialize
* &struct drm_driver and &DRM_VRAM_MM_FILE_OPERATIONS to initialize
* &struct file_operations; as illustrated below.
*
* .. code-block:: c
*
@ -69,7 +69,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
* // setup device, vram base and size
* // ...
*
* ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
* ret = drmm_vram_helper_alloc_mm(dev, vram_base, vram_size);
* if (ret)
* return ret;
* return 0;
@ -81,20 +81,12 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
* manages an area of video RAM with VRAM MM and provides GEM VRAM objects
* to userspace.
*
* To clean up the VRAM memory management, call drm_vram_helper_release_mm()
* in the driver's clean-up code.
* You don't have to clean up the instance of VRAM MM.
* drmm_vram_helper_alloc_mm() is a managed interface that installs a
* clean-up handler to run during the DRM device's release.
*
* .. code-block:: c
*
* void fini_drm_driver()
* {
* struct drm_device *dev = ...;
*
* drm_vram_helper_release_mm(dev);
* }
*
* For drawing or scanout operations, buffer object have to be pinned in video
* RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
* For drawing or scanout operations, rsp. buffer objects have to be pinned
* in video RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
* &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system
* memory. Call drm_gem_vram_unpin() to release the pinned object afterwards.
*
@ -1017,14 +1009,13 @@ static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
{
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = 0;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = TTM_MEMTYPE_FLAG_FIXED;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
@ -1067,12 +1058,8 @@ static void bo_driver_move_notify(struct ttm_buffer_object *bo,
static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
mem->bus.addr = NULL;
mem->bus.size = mem->num_pages << PAGE_SHIFT;
@ -1094,10 +1081,6 @@ static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
return 0;
}
static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{ }
static struct ttm_bo_driver bo_driver = {
.ttm_tt_create = bo_driver_ttm_tt_create,
.ttm_tt_populate = ttm_pool_populate,
@ -1107,7 +1090,6 @@ static struct ttm_bo_driver bo_driver = {
.evict_flags = bo_driver_evict_flags,
.move_notify = bo_driver_move_notify,
.io_mem_reserve = bo_driver_io_mem_reserve,
.io_mem_free = bo_driver_io_mem_free,
};
/*
@ -1176,17 +1158,7 @@ static void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
* Helpers for integration with struct drm_device
*/
/**
* drm_vram_helper_alloc_mm - Allocates a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
* @vram_base: the base address of the video memory
* @vram_size: the size of the video memory in bytes
*
* Returns:
* The new instance of &struct drm_vram_mm on success, or
* an ERR_PTR()-encoded errno code otherwise.
*/
/* deprecated; use drmm_vram_mm_init() */
struct drm_vram_mm *drm_vram_helper_alloc_mm(
struct drm_device *dev, uint64_t vram_base, size_t vram_size)
{
@ -1212,11 +1184,6 @@ struct drm_vram_mm *drm_vram_helper_alloc_mm(
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);
/**
* drm_vram_helper_release_mm - Releases a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
*/
void drm_vram_helper_release_mm(struct drm_device *dev)
{
if (!dev->vram_mm)
@ -1228,6 +1195,41 @@ void drm_vram_helper_release_mm(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);
static void drm_vram_mm_release(struct drm_device *dev, void *ptr)
{
drm_vram_helper_release_mm(dev);
}
/**
* drmm_vram_helper_init - Initializes a device's instance of
* &struct drm_vram_mm
* @dev: the DRM device
* @vram_base: the base address of the video memory
* @vram_size: the size of the video memory in bytes
*
* Creates a new instance of &struct drm_vram_mm and stores it in
* &struct drm_device.vram_mm. The instance is auto-managed and cleaned
* up as part of device cleanup. Calling this function multiple times
* will generate an error message.
*
* Returns:
* 0 on success, or a negative errno code otherwise.
*/
int drmm_vram_helper_init(struct drm_device *dev, uint64_t vram_base,
size_t vram_size)
{
struct drm_vram_mm *vram_mm;
if (drm_WARN_ON_ONCE(dev, dev->vram_mm))
return 0;
vram_mm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
if (IS_ERR(vram_mm))
return PTR_ERR(vram_mm);
return drmm_add_action_or_reset(dev, drm_vram_mm_release, NULL);
}
EXPORT_SYMBOL(drmm_vram_helper_init);
/*
* Mode-config helpers
*/
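Mirroring the ast conversion above, a hedged sketch of the managed helper in
a PCI driver's load path (the foo naming and the use of the full BAR 0 as the
VRAM size are illustrative assumptions)::

    #include <linux/pci.h>

    #include <drm/drm_gem_vram_helper.h>

    static int foo_load(struct drm_device *dev)
    {
            /* Managed: the VRAM MM instance is torn down automatically
             * when the DRM device is released, so the driver's unload
             * path needs no drm_vram_helper_release_mm() call.
             */
            return drmm_vram_helper_init(dev,
                                         pci_resource_start(dev->pdev, 0),
                                         pci_resource_len(dev->pdev, 0));
    }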


@ -21,7 +21,10 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/kthread.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_vblank.h>
#define DRM_IF_MAJOR 1
#define DRM_IF_MINOR 4
@ -38,6 +41,7 @@ struct drm_master;
struct drm_minor;
struct drm_prime_file_private;
struct drm_printer;
struct drm_vblank_crtc;
/* drm_file.c */
extern struct mutex drm_global_mutex;
@ -93,7 +97,30 @@ void drm_minor_release(struct drm_minor *minor);
void drm_managed_release(struct drm_device *dev);
/* drm_vblank.c */
static inline bool drm_vblank_passed(u64 seq, u64 ref)
{
return (seq - ref) <= (1 << 23);
}
void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe);
int drm_vblank_get(struct drm_device *dev, unsigned int pipe);
void drm_vblank_put(struct drm_device *dev, unsigned int pipe);
u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe);
/* drm_vblank_work.c */
static inline void drm_vblank_flush_worker(struct drm_vblank_crtc *vblank)
{
kthread_flush_worker(vblank->worker);
}
static inline void drm_vblank_destroy_worker(struct drm_vblank_crtc *vblank)
{
kthread_destroy_worker(vblank->worker);
}
int drm_vblank_worker_init(struct drm_vblank_crtc *vblank);
void drm_vblank_cancel_pending_works(struct drm_vblank_crtc *vblank);
void drm_handle_vblank_works(struct drm_vblank_crtc *vblank);
/* IOCTLS */
int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,


@ -225,9 +225,8 @@ int mipi_dbi_buf_copy(void *dst, struct drm_framebuffer *fb,
drm_fb_xrgb8888_to_rgb565(dst, src, fb, clip, swap);
break;
default:
dev_err_once(fb->dev->dev, "Format is not supported: %s\n",
drm_get_format_name(fb->format->format,
&format_name));
drm_err_once(fb->dev, "Format is not supported: %s\n",
drm_get_format_name(fb->format->format, &format_name));
return -EINVAL;
}
@ -295,7 +294,7 @@ static void mipi_dbi_fb_dirty(struct drm_framebuffer *fb, struct drm_rect *rect)
width * height * 2);
err_msg:
if (ret)
dev_err_once(fb->dev->dev, "Failed to update display %d\n", ret);
drm_err_once(fb->dev, "Failed to update display %d\n", ret);
drm_dev_exit(idx);
}


@ -548,7 +548,7 @@ EXPORT_SYMBOL(drm_gtf_mode_complex);
* Generalized Timing Formula is derived from:
*
* GTF Spreadsheet by Andy Morrish (1/5/97)
* available at http://www.vesa.org
* available at https://www.vesa.org
*
* And it is copied from the file of xserver/hw/xfree86/modes/xf86gtf.c.
* What I have done is to translate it by using integer calculation.


@ -25,6 +25,7 @@
*/
#include <linux/export.h>
#include <linux/kthread.h>
#include <linux/moduleparam.h>
#include <drm/drm_crtc.h>
@ -363,7 +364,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
}
static u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe)
u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
u64 count;
@ -492,16 +493,13 @@ static void vblank_disable_fn(struct timer_list *t)
static void drm_vblank_init_release(struct drm_device *dev, void *ptr)
{
unsigned int pipe;
struct drm_vblank_crtc *vblank = ptr;
for (pipe = 0; pipe < dev->num_crtcs; pipe++) {
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
drm_WARN_ON(dev, READ_ONCE(vblank->enabled) &&
drm_core_check_feature(dev, DRIVER_MODESET));
drm_WARN_ON(dev, READ_ONCE(vblank->enabled) &&
drm_core_check_feature(dev, DRIVER_MODESET));
del_timer_sync(&vblank->disable_timer);
}
drm_vblank_destroy_worker(vblank);
del_timer_sync(&vblank->disable_timer);
}
/**
@ -511,7 +509,7 @@ static void drm_vblank_init_release(struct drm_device *dev, void *ptr)
*
* This function initializes vblank support for @num_crtcs display pipelines.
* Cleanup is handled automatically through a cleanup function added with
* drmm_add_action().
* drmm_add_action_or_reset().
*
* Returns:
* Zero on success or a negative error code on failure.
@ -530,10 +528,6 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
dev->num_crtcs = num_crtcs;
ret = drmm_add_action(dev, drm_vblank_init_release, NULL);
if (ret)
return ret;
for (i = 0; i < num_crtcs; i++) {
struct drm_vblank_crtc *vblank = &dev->vblank[i];
@ -542,6 +536,15 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
init_waitqueue_head(&vblank->queue);
timer_setup(&vblank->disable_timer, vblank_disable_fn, 0);
seqlock_init(&vblank->seqlock);
ret = drmm_add_action_or_reset(dev, drm_vblank_init_release,
vblank);
if (ret)
return ret;
ret = drm_vblank_worker_init(vblank);
if (ret)
return ret;
}
return 0;
@ -1138,7 +1141,7 @@ static int drm_vblank_enable(struct drm_device *dev, unsigned int pipe)
return ret;
}
static int drm_vblank_get(struct drm_device *dev, unsigned int pipe)
int drm_vblank_get(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
unsigned long irqflags;
@ -1181,7 +1184,7 @@ int drm_crtc_vblank_get(struct drm_crtc *crtc)
}
EXPORT_SYMBOL(drm_crtc_vblank_get);
static void drm_vblank_put(struct drm_device *dev, unsigned int pipe)
void drm_vblank_put(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
@ -1284,15 +1287,17 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
unsigned int pipe = drm_crtc_index(crtc);
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
struct drm_pending_vblank_event *e, *t;
ktime_t now;
unsigned long irqflags;
u64 seq;
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return;
spin_lock_irqsave(&dev->event_lock, irqflags);
/*
* Grab event_lock early to prevent vblank work from being scheduled
* while we're in the middle of shutting down vblank interrupts
*/
spin_lock_irq(&dev->event_lock);
spin_lock(&dev->vbl_lock);
drm_dbg_vbl(dev, "crtc %d, vblank enabled %d, inmodeset %d\n",
@ -1328,11 +1333,18 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
drm_vblank_put(dev, pipe);
send_vblank_event(dev, e, seq, now);
}
spin_unlock_irqrestore(&dev->event_lock, irqflags);
/* Cancel any leftover pending vblank work */
drm_vblank_cancel_pending_works(vblank);
spin_unlock_irq(&dev->event_lock);
/* Will be reset by the modeset helpers when re-enabling the crtc by
* calling drm_calc_timestamping_constants(). */
vblank->hwmode.crtc_clock = 0;
/* Wait for any vblank work that's still executing to finish */
drm_vblank_flush_worker(vblank);
}
EXPORT_SYMBOL(drm_crtc_vblank_off);
@ -1351,11 +1363,10 @@ EXPORT_SYMBOL(drm_crtc_vblank_off);
void drm_crtc_vblank_reset(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
unsigned long irqflags;
unsigned int pipe = drm_crtc_index(crtc);
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
spin_lock_irqsave(&dev->vbl_lock, irqflags);
spin_lock_irq(&dev->vbl_lock);
/*
* Prevent subsequent drm_vblank_get() from enabling the vblank
* interrupt by bumping the refcount.
@ -1364,9 +1375,10 @@ void drm_crtc_vblank_reset(struct drm_crtc *crtc)
atomic_inc(&vblank->refcount);
vblank->inmodeset = 1;
}
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
spin_unlock_irq(&dev->vbl_lock);
drm_WARN_ON(dev, !list_empty(&dev->vblank_event_list));
drm_WARN_ON(dev, !list_empty(&vblank->pending_work));
}
EXPORT_SYMBOL(drm_crtc_vblank_reset);
@ -1416,12 +1428,11 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
struct drm_device *dev = crtc->dev;
unsigned int pipe = drm_crtc_index(crtc);
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
unsigned long irqflags;
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return;
spin_lock_irqsave(&dev->vbl_lock, irqflags);
spin_lock_irq(&dev->vbl_lock);
drm_dbg_vbl(dev, "crtc %d, vblank enabled %d, inmodeset %d\n",
pipe, vblank->enabled, vblank->inmodeset);
@ -1439,7 +1450,7 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
*/
if (atomic_read(&vblank->refcount) != 0 || drm_vblank_offdelay == 0)
drm_WARN_ON(dev, drm_vblank_enable(dev, pipe));
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
spin_unlock_irq(&dev->vbl_lock);
}
EXPORT_SYMBOL(drm_crtc_vblank_on);
@ -1540,7 +1551,6 @@ static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
unsigned long irqflags;
/* vblank is not initialized (IRQ not installed ?), or has been freed */
if (!drm_dev_has_vblank(dev))
@ -1550,9 +1560,9 @@ static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
return;
if (vblank->inmodeset) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
spin_lock_irq(&dev->vbl_lock);
drm_reset_vblank_timestamp(dev, pipe);
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
spin_unlock_irq(&dev->vbl_lock);
if (vblank->inmodeset & 0x2)
drm_vblank_put(dev, pipe);
@ -1593,11 +1603,6 @@ int drm_legacy_modeset_ctl_ioctl(struct drm_device *dev, void *data,
return 0;
}
static inline bool vblank_passed(u64 seq, u64 ref)
{
return (seq - ref) <= (1 << 23);
}
static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
u64 req_seq,
union drm_wait_vblank *vblwait,
@ -1606,7 +1611,6 @@ static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
struct drm_pending_vblank_event *e;
ktime_t now;
unsigned long flags;
u64 seq;
int ret;
@ -1628,7 +1632,7 @@ static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
e->event.vbl.crtc_id = crtc->base.id;
}
spin_lock_irqsave(&dev->event_lock, flags);
spin_lock_irq(&dev->event_lock);
/*
* drm_crtc_vblank_off() might have been called after we called
@ -1655,7 +1659,7 @@ static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
trace_drm_vblank_event_queued(file_priv, pipe, req_seq);
e->sequence = req_seq;
if (vblank_passed(seq, req_seq)) {
if (drm_vblank_passed(seq, req_seq)) {
drm_vblank_put(dev, pipe);
send_vblank_event(dev, e, seq, now);
vblwait->reply.sequence = seq;
@ -1665,12 +1669,12 @@ static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
vblwait->reply.sequence = req_seq;
}
spin_unlock_irqrestore(&dev->event_lock, flags);
spin_unlock_irq(&dev->event_lock);
return 0;
err_unlock:
spin_unlock_irqrestore(&dev->event_lock, flags);
spin_unlock_irq(&dev->event_lock);
kfree(e);
err_put:
drm_vblank_put(dev, pipe);
@ -1810,7 +1814,7 @@ int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
}
if ((flags & _DRM_VBLANK_NEXTONMISS) &&
vblank_passed(seq, req_seq)) {
drm_vblank_passed(seq, req_seq)) {
req_seq = seq + 1;
vblwait->request.type &= ~_DRM_VBLANK_NEXTONMISS;
vblwait->request.sequence = req_seq;
@ -1829,7 +1833,7 @@ int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
drm_dbg_core(dev, "waiting on vblank count %llu, crtc %u\n",
req_seq, pipe);
wait = wait_event_interruptible_timeout(vblank->queue,
vblank_passed(drm_vblank_count(dev, pipe), req_seq) ||
drm_vblank_passed(drm_vblank_count(dev, pipe), req_seq) ||
!READ_ONCE(vblank->enabled),
msecs_to_jiffies(3000));
@ -1878,7 +1882,7 @@ static void drm_handle_vblank_events(struct drm_device *dev, unsigned int pipe)
list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) {
if (e->pipe != pipe)
continue;
if (!vblank_passed(seq, e->sequence))
if (!drm_vblank_passed(seq, e->sequence))
continue;
drm_dbg_core(dev, "vblank event on %llu, current %llu\n",
@ -1948,6 +1952,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
!atomic_read(&vblank->refcount));
drm_handle_vblank_events(dev, pipe);
drm_handle_vblank_works(vblank);
spin_unlock_irqrestore(&dev->event_lock, irqflags);
@ -2061,7 +2066,6 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
u64 seq;
u64 req_seq;
int ret;
unsigned long spin_flags;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EOPNOTSUPP;
@ -2101,7 +2105,7 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
if (flags & DRM_CRTC_SEQUENCE_RELATIVE)
req_seq += seq;
if ((flags & DRM_CRTC_SEQUENCE_NEXT_ON_MISS) && vblank_passed(seq, req_seq))
if ((flags & DRM_CRTC_SEQUENCE_NEXT_ON_MISS) && drm_vblank_passed(seq, req_seq))
req_seq = seq + 1;
e->pipe = pipe;
@ -2109,7 +2113,7 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
e->event.base.length = sizeof(e->event.seq);
e->event.seq.user_data = queue_seq->user_data;
spin_lock_irqsave(&dev->event_lock, spin_flags);
spin_lock_irq(&dev->event_lock);
/*
* drm_crtc_vblank_off() might have been called after we called
@ -2130,7 +2134,7 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
e->sequence = req_seq;
if (vblank_passed(seq, req_seq)) {
if (drm_vblank_passed(seq, req_seq)) {
drm_crtc_vblank_put(crtc);
send_vblank_event(dev, e, seq, now);
queue_seq->sequence = seq;
@ -2140,13 +2144,14 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
queue_seq->sequence = req_seq;
}
spin_unlock_irqrestore(&dev->event_lock, spin_flags);
spin_unlock_irq(&dev->event_lock);
return 0;
err_unlock:
spin_unlock_irqrestore(&dev->event_lock, spin_flags);
spin_unlock_irq(&dev->event_lock);
drm_crtc_vblank_put(crtc);
err_free:
kfree(e);
return ret;
}


@ -0,0 +1,267 @@
// SPDX-License-Identifier: MIT
#include <uapi/linux/sched/types.h>
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
#include <drm/drm_vblank_work.h>
#include <drm/drm_crtc.h>
#include "drm_internal.h"
/**
* DOC: vblank works
*
* Many DRM drivers need to program hardware in a time-sensitive manner, often
* with a deadline of starting and finishing within a certain region of the
* scanout. Most of the time the safest way to accomplish this is to simply
* do said time-sensitive programming in the driver's IRQ handler, which
* allows drivers to avoid being preempted during these critical regions.
* Better still, the hardware may handle applying such time-critical
* programming independently of the CPU.
*
* While there's a decent amount of hardware that's designed so that the CPU
* doesn't need to be concerned with extremely time-sensitive programming,
* there are a few situations where it can't be helped. Some unforgiving
* hardware may require that certain time-sensitive programming be handled
* completely by the CPU, and said programming may even take too long to
* handle in an IRQ handler. Another such situation would be where the driver
* needs to perform a task that must complete within a specific scanout
* period, but might possibly block and thus cannot be handled in an IRQ
* context. Neither situation can be solved perfectly in Linux, since the
* kernel is not a realtime kernel and the scheduler may preempt us and cause
* us to miss our deadline. But for some drivers, it's good enough if we can
* lower our chance of being preempted to an absolute minimum.
*
* This is where &drm_vblank_work comes in. &drm_vblank_work provides a simple
* generic delayed work implementation which delays work execution until a
* particular vblank has passed, and then executes the work at realtime
* priority. This provides the best possible chance at performing
* time-sensitive hardware programming on time, even when the system is under
* heavy load. &drm_vblank_work also supports rescheduling, so that self
* re-arming work items can be easily implemented.
*/
void drm_handle_vblank_works(struct drm_vblank_crtc *vblank)
{
struct drm_vblank_work *work, *next;
u64 count = atomic64_read(&vblank->count);
bool wake = false;
assert_spin_locked(&vblank->dev->event_lock);
list_for_each_entry_safe(work, next, &vblank->pending_work, node) {
if (!drm_vblank_passed(count, work->count))
continue;
list_del_init(&work->node);
drm_vblank_put(vblank->dev, vblank->pipe);
kthread_queue_work(vblank->worker, &work->base);
wake = true;
}
if (wake)
wake_up_all(&vblank->work_wait_queue);
}
/* Handle cancelling any pending vblank work items and drop respective vblank
* references in response to vblank interrupts being disabled.
*/
void drm_vblank_cancel_pending_works(struct drm_vblank_crtc *vblank)
{
struct drm_vblank_work *work, *next;
assert_spin_locked(&vblank->dev->event_lock);
list_for_each_entry_safe(work, next, &vblank->pending_work, node) {
list_del_init(&work->node);
drm_vblank_put(vblank->dev, vblank->pipe);
}
wake_up_all(&vblank->work_wait_queue);
}
/**
* drm_vblank_work_schedule - schedule a vblank work
* @work: vblank work to schedule
* @count: target vblank count
* @nextonmiss: defer until the next vblank if target vblank was missed
*
* Schedule @work for execution once the crtc vblank count reaches @count.
*
* If the crtc vblank count has already reached @count and @nextonmiss is
* %false the work starts to execute immediately.
*
* If the crtc vblank count has already reached @count and @nextonmiss is
* %true the work is deferred until the next vblank (as if @count has been
* specified as crtc vblank count + 1).
*
* If @work is already scheduled, this function will reschedule said work
* using the new @count. This can be used for self-rearming work items.
*
* Returns:
* %1 if @work was successfully (re)scheduled, %0 if it was either already
* scheduled or cancelled, or a negative error code on failure.
*/
int drm_vblank_work_schedule(struct drm_vblank_work *work,
u64 count, bool nextonmiss)
{
struct drm_vblank_crtc *vblank = work->vblank;
struct drm_device *dev = vblank->dev;
u64 cur_vbl;
unsigned long irqflags;
bool passed, inmodeset, rescheduling = false, wake = false;
int ret = 0;
spin_lock_irqsave(&dev->event_lock, irqflags);
if (work->cancelling)
goto out;
spin_lock(&dev->vbl_lock);
inmodeset = vblank->inmodeset;
spin_unlock(&dev->vbl_lock);
if (inmodeset)
goto out;
if (list_empty(&work->node)) {
ret = drm_vblank_get(dev, vblank->pipe);
if (ret < 0)
goto out;
} else if (work->count == count) {
/* Already scheduled w/ same vbl count */
goto out;
} else {
rescheduling = true;
}
work->count = count;
cur_vbl = drm_vblank_count(dev, vblank->pipe);
passed = drm_vblank_passed(cur_vbl, count);
if (passed)
drm_dbg_core(dev,
"crtc %d vblank %llu already passed (current %llu)\n",
vblank->pipe, count, cur_vbl);
if (!nextonmiss && passed) {
drm_vblank_put(dev, vblank->pipe);
ret = kthread_queue_work(vblank->worker, &work->base);
if (rescheduling) {
list_del_init(&work->node);
wake = true;
}
} else {
if (!rescheduling)
list_add_tail(&work->node, &vblank->pending_work);
ret = true;
}
out:
spin_unlock_irqrestore(&dev->event_lock, irqflags);
if (wake)
wake_up_all(&vblank->work_wait_queue);
return ret;
}
EXPORT_SYMBOL(drm_vblank_work_schedule);
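/*
* Sketch of the self-rearming pattern mentioned in the kernel-doc above
* (editor's illustration; my_per_frame_hw_update is hypothetical). Since
* drm_vblank_work_schedule() may be called from the work function itself,
* a work item can keep itself running once per frame:
*
*	static void my_flip_work_func(struct kthread_work *base)
*	{
*		struct drm_vblank_work *work = to_drm_vblank_work(base);
*
*		my_per_frame_hw_update();
*
*		// Re-arm for the next vblank; nextonmiss == true keeps
*		// the cadence even if this iteration ran late.
*		drm_vblank_work_schedule(work, work->count + 1, true);
*	}
*/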
/**
* drm_vblank_work_cancel_sync - cancel a vblank work and wait for it to
* finish executing
* @work: vblank work to cancel
*
* Cancel an already scheduled vblank work and wait for its
* execution to finish.
*
* On return, @work is guaranteed to no longer be scheduled or running, even
* if it's self-arming.
*
* Returns:
* %true if the work was cancelled before it started to execute, %false
* otherwise.
*/
bool drm_vblank_work_cancel_sync(struct drm_vblank_work *work)
{
struct drm_vblank_crtc *vblank = work->vblank;
struct drm_device *dev = vblank->dev;
bool ret = false;
spin_lock_irq(&dev->event_lock);
if (!list_empty(&work->node)) {
list_del_init(&work->node);
drm_vblank_put(vblank->dev, vblank->pipe);
ret = true;
}
work->cancelling++;
spin_unlock_irq(&dev->event_lock);
wake_up_all(&vblank->work_wait_queue);
if (kthread_cancel_work_sync(&work->base))
ret = true;
spin_lock_irq(&dev->event_lock);
work->cancelling--;
spin_unlock_irq(&dev->event_lock);
return ret;
}
EXPORT_SYMBOL(drm_vblank_work_cancel_sync);
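/*
* Teardown sketch (editor's illustration, names hypothetical): a driver
* using a self-rearming work item must cancel it synchronously before
* freeing the state that its work function touches.
*
*	static void my_crtc_fini(struct my_crtc *mycrtc)
*	{
*		drm_vblank_work_cancel_sync(&mycrtc->frame_work);
*		kfree(mycrtc->frame_state);
*	}
*/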
/**
* drm_vblank_work_flush - wait for a scheduled vblank work to finish
* executing
* @work: vblank work to flush
*
* Wait until @work has finished executing once.
*/
void drm_vblank_work_flush(struct drm_vblank_work *work)
{
struct drm_vblank_crtc *vblank = work->vblank;
struct drm_device *dev = vblank->dev;
spin_lock_irq(&dev->event_lock);
wait_event_lock_irq(vblank->work_wait_queue, list_empty(&work->node),
dev->event_lock);
spin_unlock_irq(&dev->event_lock);
kthread_flush_work(&work->base);
}
EXPORT_SYMBOL(drm_vblank_work_flush);
/**
* drm_vblank_work_init - initialize a vblank work item
* @work: vblank work item
* @crtc: CRTC whose vblank will trigger the work execution
* @func: work function to be executed
*
* Initialize a vblank work item for a specific crtc.
*/
void drm_vblank_work_init(struct drm_vblank_work *work, struct drm_crtc *crtc,
void (*func)(struct kthread_work *work))
{
kthread_init_work(&work->base, func);
INIT_LIST_HEAD(&work->node);
work->vblank = &crtc->dev->vblank[drm_crtc_index(crtc)];
}
EXPORT_SYMBOL(drm_vblank_work_init);
int drm_vblank_worker_init(struct drm_vblank_crtc *vblank)
{
struct sched_param param = {
.sched_priority = MAX_RT_PRIO - 1,
};
struct kthread_worker *worker;
INIT_LIST_HEAD(&vblank->pending_work);
init_waitqueue_head(&vblank->work_wait_queue);
worker = kthread_create_worker(0, "card%d-crtc%d",
vblank->dev->primary->index,
vblank->pipe);
if (IS_ERR(worker))
return PTR_ERR(worker);
vblank->worker = worker;
return sched_setscheduler(vblank->worker->task, SCHED_FIFO, &param);
}

@@ -220,9 +220,9 @@ static int i810_dma_cleanup(struct drm_device *dev)
 	if (dev_priv->ring.virtual_start)
 		drm_legacy_ioremapfree(&dev_priv->ring.map, dev);
 	if (dev_priv->hw_status_page) {
-		pci_free_consistent(dev->pdev, PAGE_SIZE,
-				    dev_priv->hw_status_page,
-				    dev_priv->dma_status_page);
+		dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+				  dev_priv->hw_status_page,
+				  dev_priv->dma_status_page);
 	}
 	kfree(dev->dev_private);
 	dev->dev_private = NULL;
@@ -398,8 +398,8 @@ static int i810_dma_initialize(struct drm_device *dev,
 	/* Program Hardware Status Page */
 	dev_priv->hw_status_page =
-		pci_zalloc_consistent(dev->pdev, PAGE_SIZE,
-				      &dev_priv->dma_status_page);
+		dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+				   &dev_priv->dma_status_page, GFP_KERNEL);
 	if (!dev_priv->hw_status_page) {
 		dev->dev_private = (void *)dev_priv;
 		i810_dma_cleanup(dev);

@@ -14,3 +14,14 @@ config DRM_INGENIC
 	  Choose this option for DRM support for the Ingenic SoCs.

 	  If M is selected the module will be called ingenic-drm.
+
+if DRM_INGENIC
+
+config DRM_INGENIC_IPU
+	bool "IPU support for Ingenic SoCs"
+	help
+	  Choose this option to enable support for the IPU found in Ingenic SoCs.
+
+	  The Image Processing Unit (IPU) will appear as a second primary plane.
+
+endif

@@ -1 +1,3 @@
 obj-$(CONFIG_DRM_INGENIC) += ingenic-drm.o
+ingenic-drm-y = ingenic-drm-drv.o
+ingenic-drm-$(CONFIG_DRM_INGENIC_IPU) += ingenic-ipu.o

@@ -4,6 +4,9 @@
 //
 // Copyright (C) 2019, Paul Cercueil <paul@crapouillou.net>

+#include "ingenic-drm.h"
+
+#include <linux/component.h>
 #include <linux/clk.h>
 #include <linux/dma-mapping.h>
 #include <linux/module.h>
@@ -32,120 +35,6 @@
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_vblank.h>
#define JZ_REG_LCD_CFG 0x00
#define JZ_REG_LCD_VSYNC 0x04
#define JZ_REG_LCD_HSYNC 0x08
#define JZ_REG_LCD_VAT 0x0C
#define JZ_REG_LCD_DAH 0x10
#define JZ_REG_LCD_DAV 0x14
#define JZ_REG_LCD_PS 0x18
#define JZ_REG_LCD_CLS 0x1C
#define JZ_REG_LCD_SPL 0x20
#define JZ_REG_LCD_REV 0x24
#define JZ_REG_LCD_CTRL 0x30
#define JZ_REG_LCD_STATE 0x34
#define JZ_REG_LCD_IID 0x38
#define JZ_REG_LCD_DA0 0x40
#define JZ_REG_LCD_SA0 0x44
#define JZ_REG_LCD_FID0 0x48
#define JZ_REG_LCD_CMD0 0x4C
#define JZ_REG_LCD_DA1 0x50
#define JZ_REG_LCD_SA1 0x54
#define JZ_REG_LCD_FID1 0x58
#define JZ_REG_LCD_CMD1 0x5C
#define JZ_LCD_CFG_SLCD BIT(31)
#define JZ_LCD_CFG_PS_DISABLE BIT(23)
#define JZ_LCD_CFG_CLS_DISABLE BIT(22)
#define JZ_LCD_CFG_SPL_DISABLE BIT(21)
#define JZ_LCD_CFG_REV_DISABLE BIT(20)
#define JZ_LCD_CFG_HSYNCM BIT(19)
#define JZ_LCD_CFG_PCLKM BIT(18)
#define JZ_LCD_CFG_INV BIT(17)
#define JZ_LCD_CFG_SYNC_DIR BIT(16)
#define JZ_LCD_CFG_PS_POLARITY BIT(15)
#define JZ_LCD_CFG_CLS_POLARITY BIT(14)
#define JZ_LCD_CFG_SPL_POLARITY BIT(13)
#define JZ_LCD_CFG_REV_POLARITY BIT(12)
#define JZ_LCD_CFG_HSYNC_ACTIVE_LOW BIT(11)
#define JZ_LCD_CFG_PCLK_FALLING_EDGE BIT(10)
#define JZ_LCD_CFG_DE_ACTIVE_LOW BIT(9)
#define JZ_LCD_CFG_VSYNC_ACTIVE_LOW BIT(8)
#define JZ_LCD_CFG_18_BIT BIT(7)
#define JZ_LCD_CFG_PDW (BIT(5) | BIT(4))
#define JZ_LCD_CFG_MODE_GENERIC_16BIT 0
#define JZ_LCD_CFG_MODE_GENERIC_18BIT BIT(7)
#define JZ_LCD_CFG_MODE_GENERIC_24BIT BIT(6)
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_1 1
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_2 2
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_3 3
#define JZ_LCD_CFG_MODE_TV_OUT_P 4
#define JZ_LCD_CFG_MODE_TV_OUT_I 6
#define JZ_LCD_CFG_MODE_SINGLE_COLOR_STN 8
#define JZ_LCD_CFG_MODE_SINGLE_MONOCHROME_STN 9
#define JZ_LCD_CFG_MODE_DUAL_COLOR_STN 10
#define JZ_LCD_CFG_MODE_DUAL_MONOCHROME_STN 11
#define JZ_LCD_CFG_MODE_8BIT_SERIAL 12
#define JZ_LCD_CFG_MODE_LCM 13
#define JZ_LCD_VSYNC_VPS_OFFSET 16
#define JZ_LCD_VSYNC_VPE_OFFSET 0
#define JZ_LCD_HSYNC_HPS_OFFSET 16
#define JZ_LCD_HSYNC_HPE_OFFSET 0
#define JZ_LCD_VAT_HT_OFFSET 16
#define JZ_LCD_VAT_VT_OFFSET 0
#define JZ_LCD_DAH_HDS_OFFSET 16
#define JZ_LCD_DAH_HDE_OFFSET 0
#define JZ_LCD_DAV_VDS_OFFSET 16
#define JZ_LCD_DAV_VDE_OFFSET 0
#define JZ_LCD_CTRL_BURST_4 (0x0 << 28)
#define JZ_LCD_CTRL_BURST_8 (0x1 << 28)
#define JZ_LCD_CTRL_BURST_16 (0x2 << 28)
#define JZ_LCD_CTRL_RGB555 BIT(27)
#define JZ_LCD_CTRL_OFUP BIT(26)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_16 (0x0 << 24)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_4 (0x1 << 24)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_2 (0x2 << 24)
#define JZ_LCD_CTRL_PDD_MASK (0xff << 16)
#define JZ_LCD_CTRL_EOF_IRQ BIT(13)
#define JZ_LCD_CTRL_SOF_IRQ BIT(12)
#define JZ_LCD_CTRL_OFU_IRQ BIT(11)
#define JZ_LCD_CTRL_IFU0_IRQ BIT(10)
#define JZ_LCD_CTRL_IFU1_IRQ BIT(9)
#define JZ_LCD_CTRL_DD_IRQ BIT(8)
#define JZ_LCD_CTRL_QDD_IRQ BIT(7)
#define JZ_LCD_CTRL_REVERSE_ENDIAN BIT(6)
#define JZ_LCD_CTRL_LSB_FISRT BIT(5)
#define JZ_LCD_CTRL_DISABLE BIT(4)
#define JZ_LCD_CTRL_ENABLE BIT(3)
#define JZ_LCD_CTRL_BPP_1 0x0
#define JZ_LCD_CTRL_BPP_2 0x1
#define JZ_LCD_CTRL_BPP_4 0x2
#define JZ_LCD_CTRL_BPP_8 0x3
#define JZ_LCD_CTRL_BPP_15_16 0x4
#define JZ_LCD_CTRL_BPP_18_24 0x5
#define JZ_LCD_CTRL_BPP_MASK (JZ_LCD_CTRL_RGB555 | (0x7 << 0))
#define JZ_LCD_CMD_SOF_IRQ BIT(31)
#define JZ_LCD_CMD_EOF_IRQ BIT(30)
#define JZ_LCD_CMD_ENABLE_PAL BIT(28)
#define JZ_LCD_SYNC_MASK 0x3ff
#define JZ_LCD_STATE_EOF_IRQ BIT(5)
#define JZ_LCD_STATE_SOF_IRQ BIT(4)
#define JZ_LCD_STATE_DISABLED BIT(0)
struct ingenic_dma_hwdesc {
u32 next;
u32 addr;
@@ -155,24 +44,30 @@ struct ingenic_dma_hwdesc {
struct jz_soc_info {
bool needs_dev_clk;
bool has_osd;
unsigned int max_width, max_height;
};
struct ingenic_drm {
struct drm_device drm;
struct drm_plane primary;
/*
* f1 (aka. foreground1) is our primary plane, on top of which
* f0 (aka. foreground0) can be overlaid. Z-order is fixed in
* hardware and cannot be changed.
*/
struct drm_plane f0, f1, *ipu_plane;
struct drm_crtc crtc;
struct drm_encoder encoder;
struct device *dev;
struct regmap *map;
struct clk *lcd_clk, *pix_clk;
const struct jz_soc_info *soc_info;
struct ingenic_dma_hwdesc *dma_hwdesc;
dma_addr_t dma_hwdesc_phys;
struct ingenic_dma_hwdesc *dma_hwdesc_f0, *dma_hwdesc_f1;
dma_addr_t dma_hwdesc_phys_f0, dma_hwdesc_phys_f1;
bool panel_is_sharp;
bool no_vblank;
};
static const u32 ingenic_drm_primary_formats[] = {
@@ -202,7 +97,7 @@ static const struct regmap_config ingenic_drm_regmap_config = {
 	.val_bits = 32,
 	.reg_stride = 4,
-	.max_register = JZ_REG_LCD_CMD1,
+	.max_register = JZ_REG_LCD_SIZE1,

 	.writeable_reg = ingenic_drm_writeable_reg,
 };
@@ -216,17 +111,6 @@ static inline struct ingenic_drm *drm_crtc_get_priv(struct drm_crtc *crtc)
 	return container_of(crtc, struct ingenic_drm, crtc);
 }

-static inline struct ingenic_drm *
-drm_encoder_get_priv(struct drm_encoder *encoder)
-{
-	return container_of(encoder, struct ingenic_drm, encoder);
-}
-
-static inline struct ingenic_drm *drm_plane_get_priv(struct drm_plane *plane)
-{
-	return container_of(plane, struct ingenic_drm, primary);
-}
-
 static void ingenic_drm_crtc_atomic_enable(struct drm_crtc *crtc,
 					   struct drm_crtc_state *state)
 {
@@ -297,34 +181,24 @@ static void ingenic_drm_crtc_update_timings(struct ingenic_drm *priv,
regmap_write(priv->map, JZ_REG_LCD_SPL, hpe << 16 | (hpe + 1));
regmap_write(priv->map, JZ_REG_LCD_REV, mode->htotal << 16);
}
}
static void ingenic_drm_crtc_update_ctrl(struct ingenic_drm *priv,
const struct drm_format_info *finfo)
{
unsigned int ctrl = JZ_LCD_CTRL_OFUP | JZ_LCD_CTRL_BURST_16;
regmap_set_bits(priv->map, JZ_REG_LCD_CTRL,
JZ_LCD_CTRL_OFUP | JZ_LCD_CTRL_BURST_16);
switch (finfo->format) {
case DRM_FORMAT_XRGB1555:
ctrl |= JZ_LCD_CTRL_RGB555;
/* fall-through */
case DRM_FORMAT_RGB565:
ctrl |= JZ_LCD_CTRL_BPP_15_16;
break;
case DRM_FORMAT_XRGB8888:
ctrl |= JZ_LCD_CTRL_BPP_18_24;
break;
}
regmap_update_bits(priv->map, JZ_REG_LCD_CTRL,
JZ_LCD_CTRL_OFUP | JZ_LCD_CTRL_BURST_16 |
JZ_LCD_CTRL_BPP_MASK, ctrl);
/*
* IPU restart - specify how much time the LCDC will wait before
* transferring a new frame from the IPU. The value is the one
* suggested in the programming manual.
*/
regmap_write(priv->map, JZ_REG_LCD_IPUR, JZ_LCD_IPUR_IPUREN |
(ht * vpe / 3) << JZ_LCD_IPUR_IPUR_LSB);
}
static int ingenic_drm_crtc_atomic_check(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
struct ingenic_drm *priv = drm_crtc_get_priv(crtc);
struct drm_plane_state *f1_state, *f0_state, *ipu_state = NULL;
long rate;
if (!drm_atomic_crtc_needs_modeset(state))
@@ -339,27 +213,59 @@ static int ingenic_drm_crtc_atomic_check(struct drm_crtc *crtc,
if (rate < 0)
return rate;
if (priv->soc_info->has_osd) {
f1_state = drm_atomic_get_plane_state(state->state, &priv->f1);
f0_state = drm_atomic_get_plane_state(state->state, &priv->f0);
if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU) && priv->ipu_plane) {
ipu_state = drm_atomic_get_plane_state(state->state, priv->ipu_plane);
/* IPU and F1 planes cannot be enabled at the same time. */
if (f1_state->fb && ipu_state->fb) {
dev_dbg(priv->dev, "Cannot enable both F1 and IPU\n");
return -EINVAL;
}
}
/* If all the planes are disabled, we won't get a VBLANK IRQ */
priv->no_vblank = !f1_state->fb && !f0_state->fb &&
!(ipu_state && ipu_state->fb);
}
return 0;
}
static void ingenic_drm_crtc_atomic_begin(struct drm_crtc *crtc,
struct drm_crtc_state *oldstate)
{
struct ingenic_drm *priv = drm_crtc_get_priv(crtc);
u32 ctrl = 0;
if (priv->soc_info->has_osd &&
drm_atomic_crtc_needs_modeset(crtc->state)) {
/*
* If IPU plane is enabled, enable IPU as source for the F1
* plane; otherwise use regular DMA.
*/
if (priv->ipu_plane && priv->ipu_plane->state->fb)
ctrl |= JZ_LCD_OSDCTRL_IPU;
regmap_update_bits(priv->map, JZ_REG_LCD_OSDCTRL,
JZ_LCD_OSDCTRL_IPU, ctrl);
}
}
static void ingenic_drm_crtc_atomic_flush(struct drm_crtc *crtc,
struct drm_crtc_state *oldstate)
{
struct ingenic_drm *priv = drm_crtc_get_priv(crtc);
struct drm_crtc_state *state = crtc->state;
struct drm_pending_vblank_event *event = state->event;
struct drm_framebuffer *drm_fb = crtc->primary->state->fb;
const struct drm_format_info *finfo;
if (drm_atomic_crtc_needs_modeset(state)) {
finfo = drm_format_info(drm_fb->format->format);
ingenic_drm_crtc_update_timings(priv, &state->mode);
ingenic_drm_crtc_update_ctrl(priv, finfo);
clk_set_rate(priv->pix_clk, state->adjusted_mode.clock * 1000);
regmap_write(priv->map, JZ_REG_LCD_DA0, priv->dma_hwdesc->next);
}
if (event) {
@@ -374,11 +280,160 @@ static void ingenic_drm_crtc_atomic_flush(struct drm_crtc *crtc,
}
}
static int ingenic_drm_plane_atomic_check(struct drm_plane *plane,
struct drm_plane_state *state)
{
struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc = state->crtc ?: plane->state->crtc;
int ret;
if (!crtc)
return 0;
crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc);
if (WARN_ON(!crtc_state))
return -EINVAL;
ret = drm_atomic_helper_check_plane_state(state, crtc_state,
DRM_PLANE_HELPER_NO_SCALING,
DRM_PLANE_HELPER_NO_SCALING,
priv->soc_info->has_osd,
true);
if (ret)
return ret;
/*
* If OSD is not available, check that the plane is positioned at 0,0 and
* that its source size matches the CRTC size. Note that state->src_* are
* in 16.16 fixed-point format.
*/
if (!priv->soc_info->has_osd &&
(state->src_x != 0 ||
(state->src_w >> 16) != state->crtc_w ||
(state->src_h >> 16) != state->crtc_h))
return -EINVAL;
/*
* Require full modeset if enabling or disabling a plane, or changing
* its position, size or depth.
*/
if (priv->soc_info->has_osd &&
(!plane->state->fb || !state->fb ||
plane->state->crtc_x != state->crtc_x ||
plane->state->crtc_y != state->crtc_y ||
plane->state->crtc_w != state->crtc_w ||
plane->state->crtc_h != state->crtc_h ||
plane->state->fb->format->format != state->fb->format->format))
crtc_state->mode_changed = true;
return 0;
}
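/*
* Editor's note on the 16.16 check above: a plane scanning out an 800x480
* source stores src_w == 800 << 16, so (state->src_w >> 16) recovers the
* integer width that must equal state->crtc_w on SoCs without an OSD.
*/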
static void ingenic_drm_plane_enable(struct ingenic_drm *priv,
struct drm_plane *plane)
{
unsigned int en_bit;
if (priv->soc_info->has_osd) {
if (plane->type == DRM_PLANE_TYPE_PRIMARY)
en_bit = JZ_LCD_OSDC_F1EN;
else
en_bit = JZ_LCD_OSDC_F0EN;
regmap_set_bits(priv->map, JZ_REG_LCD_OSDC, en_bit);
}
}
void ingenic_drm_plane_disable(struct device *dev, struct drm_plane *plane)
{
struct ingenic_drm *priv = dev_get_drvdata(dev);
unsigned int en_bit;
if (priv->soc_info->has_osd) {
if (plane->type == DRM_PLANE_TYPE_PRIMARY)
en_bit = JZ_LCD_OSDC_F1EN;
else
en_bit = JZ_LCD_OSDC_F0EN;
regmap_clear_bits(priv->map, JZ_REG_LCD_OSDC, en_bit);
}
}
static void ingenic_drm_plane_atomic_disable(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
ingenic_drm_plane_disable(priv->dev, plane);
}
void ingenic_drm_plane_config(struct device *dev,
struct drm_plane *plane, u32 fourcc)
{
struct ingenic_drm *priv = dev_get_drvdata(dev);
struct drm_plane_state *state = plane->state;
unsigned int xy_reg, size_reg;
unsigned int ctrl = 0;
ingenic_drm_plane_enable(priv, plane);
if (priv->soc_info->has_osd &&
plane->type == DRM_PLANE_TYPE_PRIMARY) {
switch (fourcc) {
case DRM_FORMAT_XRGB1555:
ctrl |= JZ_LCD_OSDCTRL_RGB555;
fallthrough;
case DRM_FORMAT_RGB565:
ctrl |= JZ_LCD_OSDCTRL_BPP_15_16;
break;
case DRM_FORMAT_XRGB8888:
ctrl |= JZ_LCD_OSDCTRL_BPP_18_24;
break;
}
regmap_update_bits(priv->map, JZ_REG_LCD_OSDCTRL,
JZ_LCD_OSDCTRL_BPP_MASK, ctrl);
} else {
switch (fourcc) {
case DRM_FORMAT_XRGB1555:
ctrl |= JZ_LCD_CTRL_RGB555;
fallthrough;
case DRM_FORMAT_RGB565:
ctrl |= JZ_LCD_CTRL_BPP_15_16;
break;
case DRM_FORMAT_XRGB8888:
ctrl |= JZ_LCD_CTRL_BPP_18_24;
break;
}
regmap_update_bits(priv->map, JZ_REG_LCD_CTRL,
JZ_LCD_CTRL_BPP_MASK, ctrl);
}
if (priv->soc_info->has_osd) {
if (plane->type == DRM_PLANE_TYPE_PRIMARY) {
xy_reg = JZ_REG_LCD_XYP1;
size_reg = JZ_REG_LCD_SIZE1;
} else {
xy_reg = JZ_REG_LCD_XYP0;
size_reg = JZ_REG_LCD_SIZE0;
}
regmap_write(priv->map, xy_reg,
state->crtc_x << JZ_LCD_XYP01_XPOS_LSB |
state->crtc_y << JZ_LCD_XYP01_YPOS_LSB);
regmap_write(priv->map, size_reg,
state->crtc_w << JZ_LCD_SIZE01_WIDTH_LSB |
state->crtc_h << JZ_LCD_SIZE01_HEIGHT_LSB);
}
}
static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
struct drm_plane_state *oldstate)
{
struct ingenic_drm *priv = drm_plane_get_priv(plane);
struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
struct drm_plane_state *state = plane->state;
struct ingenic_dma_hwdesc *hwdesc;
unsigned int width, height, cpp;
dma_addr_t addr;
@@ -386,11 +441,19 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
 		addr = drm_fb_cma_get_gem_addr(state->fb, state, 0);
 		width = state->src_w >> 16;
 		height = state->src_h >> 16;
-		cpp = state->fb->format->cpp[plane->index];
+		cpp = state->fb->format->cpp[0];

-		priv->dma_hwdesc->addr = addr;
-		priv->dma_hwdesc->cmd = width * height * cpp / 4;
-		priv->dma_hwdesc->cmd |= JZ_LCD_CMD_EOF_IRQ;
+		if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
+			hwdesc = priv->dma_hwdesc_f0;
+		else
+			hwdesc = priv->dma_hwdesc_f1;
+
+		hwdesc->addr = addr;
+		hwdesc->cmd = JZ_LCD_CMD_EOF_IRQ | (width * height * cpp / 4);
+
+		if (drm_atomic_crtc_needs_modeset(state->crtc->state))
+			ingenic_drm_plane_config(priv->dev, plane,
+						 state->fb->format->format);
 	}
 }
@@ -398,7 +461,7 @@ static void ingenic_drm_encoder_atomic_mode_set(struct drm_encoder *encoder,
 					    struct drm_crtc_state *crtc_state,
 					    struct drm_connector_state *conn_state)
 {
-	struct ingenic_drm *priv = drm_encoder_get_priv(encoder);
+	struct ingenic_drm *priv = drm_device_get_priv(encoder->dev);
 	struct drm_display_mode *mode = &crtc_state->adjusted_mode;
 	struct drm_connector *conn = conn_state->connector;
 	struct drm_display_info *info = &conn->display_info;
@@ -474,6 +537,29 @@ static int ingenic_drm_encoder_atomic_check(struct drm_encoder *encoder,
 	}
 }

+static void ingenic_drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
+{
+	/*
+	 * Just your regular drm_atomic_helper_commit_tail(), except that it
+	 * skips drm_atomic_helper_wait_for_vblanks() when priv->no_vblank
+	 * is set.
+	 */
+	struct drm_device *dev = old_state->dev;
+	struct ingenic_drm *priv = drm_device_get_priv(dev);
+
+	drm_atomic_helper_commit_modeset_disables(dev, old_state);
+	drm_atomic_helper_commit_planes(dev, old_state, 0);
+	drm_atomic_helper_commit_modeset_enables(dev, old_state);
+
+	drm_atomic_helper_commit_hw_done(old_state);
+
+	if (!priv->no_vblank)
+		drm_atomic_helper_wait_for_vblanks(dev, old_state);
+
+	drm_atomic_helper_cleanup_planes(dev, old_state);
+}
+
 static irqreturn_t ingenic_drm_irq_handler(int irq, void *arg)
 {
 	struct ingenic_drm *priv = drm_device_get_priv(arg);
@@ -513,9 +599,9 @@ static struct drm_driver ingenic_drm_driver_data = {
 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
 	.name = "ingenic-drm",
 	.desc = "DRM module for Ingenic SoCs",
-	.date = "20190422",
+	.date = "20200716",
 	.major = 1,
-	.minor = 0,
+	.minor = 1,
 	.patchlevel = 0,

 	.fops = &ingenic_drm_fops,
@@ -551,12 +637,15 @@ static const struct drm_crtc_funcs ingenic_drm_crtc_funcs = {
static const struct drm_plane_helper_funcs ingenic_drm_plane_helper_funcs = {
.atomic_update = ingenic_drm_plane_atomic_update,
.atomic_check = ingenic_drm_plane_atomic_check,
.atomic_disable = ingenic_drm_plane_atomic_disable,
.prepare_fb = drm_gem_fb_prepare_fb,
};
static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
.atomic_enable = ingenic_drm_crtc_atomic_enable,
.atomic_disable = ingenic_drm_crtc_atomic_disable,
.atomic_begin = ingenic_drm_crtc_atomic_begin,
.atomic_flush = ingenic_drm_crtc_atomic_flush,
.atomic_check = ingenic_drm_crtc_atomic_check,
};
@@ -573,25 +662,30 @@ static const struct drm_mode_config_funcs ingenic_drm_mode_config_funcs = {
 	.atomic_commit = drm_atomic_helper_commit,
 };

-static void ingenic_drm_free_dma_hwdesc(void *d)
+static struct drm_mode_config_helper_funcs ingenic_drm_mode_config_helpers = {
+	.atomic_commit_tail = ingenic_drm_atomic_helper_commit_tail,
+};
+
+static void ingenic_drm_unbind_all(void *d)
 {
 	struct ingenic_drm *priv = d;

-	dma_free_coherent(priv->dev, sizeof(*priv->dma_hwdesc),
-			  priv->dma_hwdesc, priv->dma_hwdesc_phys);
+	component_unbind_all(priv->dev, &priv->drm);
 }

-static int ingenic_drm_probe(struct platform_device *pdev)
+static int ingenic_drm_bind(struct device *dev)
 {
+	struct platform_device *pdev = to_platform_device(dev);
 	const struct jz_soc_info *soc_info;
-	struct device *dev = &pdev->dev;
 	struct ingenic_drm *priv;
 	struct clk *parent_clk;
 	struct drm_bridge *bridge;
 	struct drm_panel *panel;
+	struct drm_encoder *encoder;
 	struct drm_device *drm;
 	void __iomem *base;
 	long parent_rate;
+	unsigned int i, clone_mask = 0;
 	int ret, irq;

 	soc_info = of_device_get_match_data(dev);
@@ -620,17 +714,18 @@ static int ingenic_drm_probe(struct platform_device *pdev)
 	drm->mode_config.max_width = soc_info->max_width;
 	drm->mode_config.max_height = 4095;
 	drm->mode_config.funcs = &ingenic_drm_mode_config_funcs;
+	drm->mode_config.helper_private = &ingenic_drm_mode_config_helpers;

 	base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(base)) {
-		dev_err(dev, "Failed to get memory resource");
+		dev_err(dev, "Failed to get memory resource\n");
 		return PTR_ERR(base);
 	}

 	priv->map = devm_regmap_init_mmio(dev, base,
 					  &ingenic_drm_regmap_config);
 	if (IS_ERR(priv->map)) {
-		dev_err(dev, "Failed to create regmap");
+		dev_err(dev, "Failed to create regmap\n");
 		return PTR_ERR(priv->map);
 	}
@@ -641,89 +736,150 @@ static int ingenic_drm_probe(struct platform_device *pdev)
if (soc_info->needs_dev_clk) {
priv->lcd_clk = devm_clk_get(dev, "lcd");
if (IS_ERR(priv->lcd_clk)) {
dev_err(dev, "Failed to get lcd clock");
dev_err(dev, "Failed to get lcd clock\n");
return PTR_ERR(priv->lcd_clk);
}
}
priv->pix_clk = devm_clk_get(dev, "lcd_pclk");
if (IS_ERR(priv->pix_clk)) {
dev_err(dev, "Failed to get pixel clock");
dev_err(dev, "Failed to get pixel clock\n");
return PTR_ERR(priv->pix_clk);
}
ret = drm_of_find_panel_or_bridge(dev->of_node, 0, 0, &panel, &bridge);
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get panel handle");
return ret;
}
if (panel)
bridge = devm_drm_panel_bridge_add_typed(dev, panel,
DRM_MODE_CONNECTOR_DPI);
priv->dma_hwdesc = dma_alloc_coherent(dev, sizeof(*priv->dma_hwdesc),
&priv->dma_hwdesc_phys,
GFP_KERNEL);
if (!priv->dma_hwdesc)
priv->dma_hwdesc_f1 = dmam_alloc_coherent(dev, sizeof(*priv->dma_hwdesc_f1),
&priv->dma_hwdesc_phys_f1,
GFP_KERNEL);
if (!priv->dma_hwdesc_f1)
return -ENOMEM;
ret = devm_add_action_or_reset(dev, ingenic_drm_free_dma_hwdesc, priv);
if (ret)
return ret;
priv->dma_hwdesc_f1->next = priv->dma_hwdesc_phys_f1;
priv->dma_hwdesc_f1->id = 0xf1;
priv->dma_hwdesc->next = priv->dma_hwdesc_phys;
priv->dma_hwdesc->id = 0xdeafbead;
if (priv->soc_info->has_osd) {
priv->dma_hwdesc_f0 = dmam_alloc_coherent(dev,
sizeof(*priv->dma_hwdesc_f0),
&priv->dma_hwdesc_phys_f0,
GFP_KERNEL);
if (!priv->dma_hwdesc_f0)
return -ENOMEM;
drm_plane_helper_add(&priv->primary, &ingenic_drm_plane_helper_funcs);
priv->dma_hwdesc_f0->next = priv->dma_hwdesc_phys_f0;
priv->dma_hwdesc_f0->id = 0xf0;
}
ret = drm_universal_plane_init(drm, &priv->primary,
0, &ingenic_drm_primary_plane_funcs,
if (soc_info->has_osd)
priv->ipu_plane = drm_plane_from_index(drm, 0);
drm_plane_helper_add(&priv->f1, &ingenic_drm_plane_helper_funcs);
ret = drm_universal_plane_init(drm, &priv->f1, 1,
&ingenic_drm_primary_plane_funcs,
ingenic_drm_primary_formats,
ARRAY_SIZE(ingenic_drm_primary_formats),
NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
if (ret) {
dev_err(dev, "Failed to register primary plane: %i", ret);
dev_err(dev, "Failed to register plane: %i\n", ret);
return ret;
}
drm_crtc_helper_add(&priv->crtc, &ingenic_drm_crtc_helper_funcs);
ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->primary,
ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->f1,
NULL, &ingenic_drm_crtc_funcs, NULL);
if (ret) {
dev_err(dev, "Failed to init CRTC: %i", ret);
dev_err(dev, "Failed to init CRTC: %i\n", ret);
return ret;
}
priv->encoder.possible_crtcs = 1;
if (soc_info->has_osd) {
drm_plane_helper_add(&priv->f0,
&ingenic_drm_plane_helper_funcs);
drm_encoder_helper_add(&priv->encoder,
&ingenic_drm_encoder_helper_funcs);
ret = drm_universal_plane_init(drm, &priv->f0, 1,
&ingenic_drm_primary_plane_funcs,
ingenic_drm_primary_formats,
ARRAY_SIZE(ingenic_drm_primary_formats),
NULL, DRM_PLANE_TYPE_OVERLAY,
NULL);
if (ret) {
dev_err(dev, "Failed to register overlay plane: %i\n",
ret);
return ret;
}
ret = drm_simple_encoder_init(drm, &priv->encoder,
DRM_MODE_ENCODER_DPI);
if (ret) {
dev_err(dev, "Failed to init encoder: %i", ret);
return ret;
if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU)) {
ret = component_bind_all(dev, drm);
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to bind components: %i\n", ret);
return ret;
}
ret = devm_add_action_or_reset(dev, ingenic_drm_unbind_all, priv);
if (ret)
return ret;
priv->ipu_plane = drm_plane_from_index(drm, 2);
if (!priv->ipu_plane) {
dev_err(dev, "Failed to retrieve IPU plane\n");
return -EINVAL;
}
}
}
ret = drm_bridge_attach(&priv->encoder, bridge, NULL, 0);
if (ret) {
dev_err(dev, "Unable to attach bridge");
return ret;
for (i = 0; ; i++) {
ret = drm_of_find_panel_or_bridge(dev->of_node, 0, i, &panel, &bridge);
if (ret) {
if (ret == -ENODEV)
break; /* we're done */
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get bridge handle\n");
return ret;
}
if (panel)
bridge = devm_drm_panel_bridge_add_typed(dev, panel,
DRM_MODE_CONNECTOR_DPI);
encoder = devm_kzalloc(dev, sizeof(*encoder), GFP_KERNEL);
if (!encoder)
return -ENOMEM;
encoder->possible_crtcs = 1;
drm_encoder_helper_add(encoder, &ingenic_drm_encoder_helper_funcs);
ret = drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_DPI);
if (ret) {
dev_err(dev, "Failed to init encoder: %d\n", ret);
return ret;
}
ret = drm_bridge_attach(encoder, bridge, NULL, 0);
if (ret) {
dev_err(dev, "Unable to attach bridge\n");
return ret;
}
}
drm_for_each_encoder(encoder, drm) {
clone_mask |= BIT(drm_encoder_index(encoder));
}
drm_for_each_encoder(encoder, drm) {
encoder->possible_clones = clone_mask;
}
ret = drm_irq_install(drm, irq);
if (ret) {
dev_err(dev, "Unable to install IRQ handler");
dev_err(dev, "Unable to install IRQ handler\n");
return ret;
}
ret = drm_vblank_init(drm, 1);
if (ret) {
dev_err(dev, "Failed calling drm_vblank_init()");
dev_err(dev, "Failed calling drm_vblank_init()\n");
return ret;
}
@@ -731,7 +887,7 @@ static int ingenic_drm_probe(struct platform_device *pdev)
 	ret = clk_prepare_enable(priv->pix_clk);
 	if (ret) {
-		dev_err(dev, "Unable to start pixel clock");
+		dev_err(dev, "Unable to start pixel clock\n");
 		return ret;
 	}
@@ -746,20 +902,28 @@ static int ingenic_drm_probe(struct platform_device *pdev)
 		 */
 		ret = clk_set_rate(priv->lcd_clk, parent_rate);
 		if (ret) {
-			dev_err(dev, "Unable to set LCD clock rate");
+			dev_err(dev, "Unable to set LCD clock rate\n");
 			goto err_pixclk_disable;
 		}

 		ret = clk_prepare_enable(priv->lcd_clk);
 		if (ret) {
-			dev_err(dev, "Unable to start lcd clock");
+			dev_err(dev, "Unable to start lcd clock\n");
 			goto err_pixclk_disable;
 		}
 	}

+	/* Set address of our DMA descriptor chain */
+	regmap_write(priv->map, JZ_REG_LCD_DA0, priv->dma_hwdesc_phys_f0);
+	regmap_write(priv->map, JZ_REG_LCD_DA1, priv->dma_hwdesc_phys_f1);
+
+	/* Enable OSD if available */
+	if (soc_info->has_osd)
+		regmap_write(priv->map, JZ_REG_LCD_OSDC, JZ_LCD_OSDC_OSDEN);
+
 	ret = drm_dev_register(drm, 0);
 	if (ret) {
-		dev_err(dev, "Failed to register DRM driver");
+		dev_err(dev, "Failed to register DRM driver\n");
 		goto err_devclk_disable;
 	}
@@ -775,9 +939,14 @@ static int ingenic_drm_probe(struct platform_device *pdev)
 	return ret;
 }

-static int ingenic_drm_remove(struct platform_device *pdev)
+static int compare_of(struct device *dev, void *data)
 {
-	struct ingenic_drm *priv = platform_get_drvdata(pdev);
+	return dev->of_node == data;
+}
+
+static void ingenic_drm_unbind(struct device *dev)
+{
+	struct ingenic_drm *priv = dev_get_drvdata(dev);

 	if (priv->lcd_clk)
 		clk_disable_unprepare(priv->lcd_clk);
@@ -785,24 +954,63 @@ static int ingenic_drm_remove(struct platform_device *pdev)
drm_dev_unregister(&priv->drm);
drm_atomic_helper_shutdown(&priv->drm);
}
static const struct component_master_ops ingenic_master_ops = {
.bind = ingenic_drm_bind,
.unbind = ingenic_drm_unbind,
};
static int ingenic_drm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct component_match *match = NULL;
struct device_node *np;
if (!IS_ENABLED(CONFIG_DRM_INGENIC_IPU))
return ingenic_drm_bind(dev);
/* IPU is at port address 8 */
np = of_graph_get_remote_node(dev->of_node, 8, 0);
if (!np) {
dev_err(dev, "Unable to get IPU node\n");
return -EINVAL;
}
drm_of_component_match_add(dev, &match, compare_of, np);
return component_master_add_with_match(dev, &ingenic_master_ops, match);
}
static int ingenic_drm_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
if (!IS_ENABLED(CONFIG_DRM_INGENIC_IPU))
ingenic_drm_unbind(dev);
else
component_master_del(dev, &ingenic_master_ops);
return 0;
}
static const struct jz_soc_info jz4740_soc_info = {
.needs_dev_clk = true,
.has_osd = false,
.max_width = 800,
.max_height = 600,
};
static const struct jz_soc_info jz4725b_soc_info = {
.needs_dev_clk = false,
.has_osd = true,
.max_width = 800,
.max_height = 600,
};
static const struct jz_soc_info jz4770_soc_info = {
.needs_dev_clk = false,
.has_osd = true,
.max_width = 1280,
.max_height = 720,
};
@@ -823,7 +1031,29 @@ static struct platform_driver ingenic_drm_driver = {
 	.probe = ingenic_drm_probe,
 	.remove = ingenic_drm_remove,
 };
-module_platform_driver(ingenic_drm_driver);
+
+static int ingenic_drm_init(void)
+{
+	int err;
+
+	if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU)) {
+		err = platform_driver_register(ingenic_ipu_driver_ptr);
+		if (err)
+			return err;
+	}
+
+	return platform_driver_register(&ingenic_drm_driver);
+}
+module_init(ingenic_drm_init);
+
+static void ingenic_drm_exit(void)
+{
+	platform_driver_unregister(&ingenic_drm_driver);
+
+	if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU))
+		platform_driver_unregister(ingenic_ipu_driver_ptr);
+}
+module_exit(ingenic_drm_exit);

 MODULE_AUTHOR("Paul Cercueil <paul@crapouillou.net>");
 MODULE_DESCRIPTION("DRM driver for the Ingenic SoCs");

@@ -0,0 +1,173 @@
/* SPDX-License-Identifier: GPL-2.0 */
//
// Ingenic JZ47xx KMS driver - Register definitions and private API
//
// Copyright (C) 2020, Paul Cercueil <paul@crapouillou.net>
#ifndef DRIVERS_GPU_DRM_INGENIC_INGENIC_DRM_H
#define DRIVERS_GPU_DRM_INGENIC_INGENIC_DRM_H
#include <linux/bitops.h>
#include <linux/types.h>
#define JZ_REG_LCD_CFG 0x00
#define JZ_REG_LCD_VSYNC 0x04
#define JZ_REG_LCD_HSYNC 0x08
#define JZ_REG_LCD_VAT 0x0C
#define JZ_REG_LCD_DAH 0x10
#define JZ_REG_LCD_DAV 0x14
#define JZ_REG_LCD_PS 0x18
#define JZ_REG_LCD_CLS 0x1C
#define JZ_REG_LCD_SPL 0x20
#define JZ_REG_LCD_REV 0x24
#define JZ_REG_LCD_CTRL 0x30
#define JZ_REG_LCD_STATE 0x34
#define JZ_REG_LCD_IID 0x38
#define JZ_REG_LCD_DA0 0x40
#define JZ_REG_LCD_SA0 0x44
#define JZ_REG_LCD_FID0 0x48
#define JZ_REG_LCD_CMD0 0x4C
#define JZ_REG_LCD_DA1 0x50
#define JZ_REG_LCD_SA1 0x54
#define JZ_REG_LCD_FID1 0x58
#define JZ_REG_LCD_CMD1 0x5C
#define JZ_REG_LCD_OSDC 0x100
#define JZ_REG_LCD_OSDCTRL 0x104
#define JZ_REG_LCD_OSDS 0x108
#define JZ_REG_LCD_BGC 0x10c
#define JZ_REG_LCD_KEY0 0x110
#define JZ_REG_LCD_KEY1 0x114
#define JZ_REG_LCD_ALPHA 0x118
#define JZ_REG_LCD_IPUR 0x11c
#define JZ_REG_LCD_XYP0 0x120
#define JZ_REG_LCD_XYP1 0x124
#define JZ_REG_LCD_SIZE0 0x128
#define JZ_REG_LCD_SIZE1 0x12c
#define JZ_LCD_CFG_SLCD BIT(31)
#define JZ_LCD_CFG_PS_DISABLE BIT(23)
#define JZ_LCD_CFG_CLS_DISABLE BIT(22)
#define JZ_LCD_CFG_SPL_DISABLE BIT(21)
#define JZ_LCD_CFG_REV_DISABLE BIT(20)
#define JZ_LCD_CFG_HSYNCM BIT(19)
#define JZ_LCD_CFG_PCLKM BIT(18)
#define JZ_LCD_CFG_INV BIT(17)
#define JZ_LCD_CFG_SYNC_DIR BIT(16)
#define JZ_LCD_CFG_PS_POLARITY BIT(15)
#define JZ_LCD_CFG_CLS_POLARITY BIT(14)
#define JZ_LCD_CFG_SPL_POLARITY BIT(13)
#define JZ_LCD_CFG_REV_POLARITY BIT(12)
#define JZ_LCD_CFG_HSYNC_ACTIVE_LOW BIT(11)
#define JZ_LCD_CFG_PCLK_FALLING_EDGE BIT(10)
#define JZ_LCD_CFG_DE_ACTIVE_LOW BIT(9)
#define JZ_LCD_CFG_VSYNC_ACTIVE_LOW BIT(8)
#define JZ_LCD_CFG_18_BIT BIT(7)
#define JZ_LCD_CFG_PDW (BIT(5) | BIT(4))
#define JZ_LCD_CFG_MODE_GENERIC_16BIT 0
#define JZ_LCD_CFG_MODE_GENERIC_18BIT BIT(7)
#define JZ_LCD_CFG_MODE_GENERIC_24BIT BIT(6)
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_1 1
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_2 2
#define JZ_LCD_CFG_MODE_SPECIAL_TFT_3 3
#define JZ_LCD_CFG_MODE_TV_OUT_P 4
#define JZ_LCD_CFG_MODE_TV_OUT_I 6
#define JZ_LCD_CFG_MODE_SINGLE_COLOR_STN 8
#define JZ_LCD_CFG_MODE_SINGLE_MONOCHROME_STN 9
#define JZ_LCD_CFG_MODE_DUAL_COLOR_STN 10
#define JZ_LCD_CFG_MODE_DUAL_MONOCHROME_STN 11
#define JZ_LCD_CFG_MODE_8BIT_SERIAL 12
#define JZ_LCD_CFG_MODE_LCM 13
#define JZ_LCD_VSYNC_VPS_OFFSET 16
#define JZ_LCD_VSYNC_VPE_OFFSET 0
#define JZ_LCD_HSYNC_HPS_OFFSET 16
#define JZ_LCD_HSYNC_HPE_OFFSET 0
#define JZ_LCD_VAT_HT_OFFSET 16
#define JZ_LCD_VAT_VT_OFFSET 0
#define JZ_LCD_DAH_HDS_OFFSET 16
#define JZ_LCD_DAH_HDE_OFFSET 0
#define JZ_LCD_DAV_VDS_OFFSET 16
#define JZ_LCD_DAV_VDE_OFFSET 0
#define JZ_LCD_CTRL_BURST_4 (0x0 << 28)
#define JZ_LCD_CTRL_BURST_8 (0x1 << 28)
#define JZ_LCD_CTRL_BURST_16 (0x2 << 28)
#define JZ_LCD_CTRL_RGB555 BIT(27)
#define JZ_LCD_CTRL_OFUP BIT(26)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_16 (0x0 << 24)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_4 (0x1 << 24)
#define JZ_LCD_CTRL_FRC_GRAYSCALE_2 (0x2 << 24)
#define JZ_LCD_CTRL_PDD_MASK (0xff << 16)
#define JZ_LCD_CTRL_EOF_IRQ BIT(13)
#define JZ_LCD_CTRL_SOF_IRQ BIT(12)
#define JZ_LCD_CTRL_OFU_IRQ BIT(11)
#define JZ_LCD_CTRL_IFU0_IRQ BIT(10)
#define JZ_LCD_CTRL_IFU1_IRQ BIT(9)
#define JZ_LCD_CTRL_DD_IRQ BIT(8)
#define JZ_LCD_CTRL_QDD_IRQ BIT(7)
#define JZ_LCD_CTRL_REVERSE_ENDIAN BIT(6)
#define JZ_LCD_CTRL_LSB_FISRT BIT(5)
#define JZ_LCD_CTRL_DISABLE BIT(4)
#define JZ_LCD_CTRL_ENABLE BIT(3)
#define JZ_LCD_CTRL_BPP_1 0x0
#define JZ_LCD_CTRL_BPP_2 0x1
#define JZ_LCD_CTRL_BPP_4 0x2
#define JZ_LCD_CTRL_BPP_8 0x3
#define JZ_LCD_CTRL_BPP_15_16 0x4
#define JZ_LCD_CTRL_BPP_18_24 0x5
#define JZ_LCD_CTRL_BPP_MASK (JZ_LCD_CTRL_RGB555 | 0x7)
#define JZ_LCD_CMD_SOF_IRQ BIT(31)
#define JZ_LCD_CMD_EOF_IRQ BIT(30)
#define JZ_LCD_CMD_ENABLE_PAL BIT(28)
#define JZ_LCD_SYNC_MASK 0x3ff
#define JZ_LCD_STATE_EOF_IRQ BIT(5)
#define JZ_LCD_STATE_SOF_IRQ BIT(4)
#define JZ_LCD_STATE_DISABLED BIT(0)
#define JZ_LCD_OSDC_OSDEN BIT(0)
#define JZ_LCD_OSDC_F0EN BIT(3)
#define JZ_LCD_OSDC_F1EN BIT(4)
#define JZ_LCD_OSDCTRL_IPU BIT(15)
#define JZ_LCD_OSDCTRL_RGB555 BIT(4)
#define JZ_LCD_OSDCTRL_CHANGE BIT(3)
#define JZ_LCD_OSDCTRL_BPP_15_16 0x4
#define JZ_LCD_OSDCTRL_BPP_18_24 0x5
#define JZ_LCD_OSDCTRL_BPP_30 0x7
#define JZ_LCD_OSDCTRL_BPP_MASK (JZ_LCD_OSDCTRL_RGB555 | 0x7)
#define JZ_LCD_OSDS_READY BIT(0)
#define JZ_LCD_IPUR_IPUREN BIT(31)
#define JZ_LCD_IPUR_IPUR_LSB 0
#define JZ_LCD_XYP01_XPOS_LSB 0
#define JZ_LCD_XYP01_YPOS_LSB 16
#define JZ_LCD_SIZE01_WIDTH_LSB 0
#define JZ_LCD_SIZE01_HEIGHT_LSB 16
struct device;
struct drm_plane;
struct drm_plane_state;
struct platform_driver;
void ingenic_drm_plane_config(struct device *dev,
struct drm_plane *plane, u32 fourcc);
void ingenic_drm_plane_disable(struct device *dev, struct drm_plane *plane);
extern struct platform_driver *ingenic_ipu_driver_ptr;
#endif /* DRIVERS_GPU_DRM_INGENIC_INGENIC_DRM_H */

@@ -0,0 +1,853 @@
// SPDX-License-Identifier: GPL-2.0
//
// Ingenic JZ47xx IPU driver
//
// Copyright (C) 2020, Paul Cercueil <paul@crapouillou.net>
// Copyright (C) 2020, Daniel Silsby <dansilsby@gmail.com>
#include "ingenic-drm.h"
#include "ingenic-ipu.h"
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/gcd.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/regmap.h>
#include <linux/time.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_plane.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_property.h>
#include <drm/drm_vblank.h>
struct ingenic_ipu;
struct soc_info {
const u32 *formats;
size_t num_formats;
bool has_bicubic;
void (*set_coefs)(struct ingenic_ipu *ipu, unsigned int reg,
unsigned int sharpness, bool downscale,
unsigned int weight, unsigned int offset);
};
struct ingenic_ipu {
struct drm_plane plane;
struct drm_device *drm;
struct device *dev, *master;
struct regmap *map;
struct clk *clk;
const struct soc_info *soc_info;
unsigned int num_w, num_h, denom_w, denom_h;
dma_addr_t addr_y, addr_u, addr_v;
struct drm_property *sharpness_prop;
unsigned int sharpness;
};
/* Signed 15.16 fixed-point math (for bicubic scaling coefficients) */
#define I2F(i) ((s32)(i) * 65536)
#define F2I(f) ((f) / 65536)
#define FMUL(fa, fb) ((s32)(((s64)(fa) * (s64)(fb)) / 65536))
#define SHARPNESS_INCR (I2F(-1) / 8)
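/*
* Worked example of the 15.16 helpers above (editor's illustration, the
* values are not taken from the driver): 1.5 * 2.0 == 3.0.
*
*	s32 f_a = I2F(3) / 2;		// 98304  == 1.5 in 15.16
*	s32 f_b = I2F(2);		// 131072 == 2.0
*	s32 f_p = FMUL(f_a, f_b);	// 196608 == I2F(3) == 3.0
*	int i   = F2I(f_p);		// 3
*/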
static inline struct ingenic_ipu *plane_to_ingenic_ipu(struct drm_plane *plane)
{
return container_of(plane, struct ingenic_ipu, plane);
}
/*
* Apply conventional cubic convolution kernel. Both parameters
* and return value are 15.16 signed fixed-point.
*
* @f_a: Sharpness factor, typically in range [-4.0, -0.25].
* A larger magnitude increases perceived sharpness, but going past
* -2.0 might cause ringing artifacts to outweigh any improvement.
* Nice values on a 320x240 LCD are between -0.75 and -2.0.
*
* @f_x: Absolute distance in pixels from 'pixel 0' sample position
* along horizontal (or vertical) source axis. Range is [0, +2.0].
*
* returns: Weight of this pixel within 4-pixel sample group. Range is
* [-2.0, +2.0]. For moderate (i.e. > -3.0) sharpness factors,
* range is within [-1.0, +1.0].
*/
static inline s32 cubic_conv(s32 f_a, s32 f_x)
{
const s32 f_1 = I2F(1);
const s32 f_2 = I2F(2);
const s32 f_3 = I2F(3);
const s32 f_4 = I2F(4);
const s32 f_x2 = FMUL(f_x, f_x);
const s32 f_x3 = FMUL(f_x, f_x2);
if (f_x <= f_1)
return FMUL((f_a + f_2), f_x3) - FMUL((f_a + f_3), f_x2) + f_1;
else if (f_x <= f_2)
return FMUL(f_a, (f_x3 - 5 * f_x2 + 8 * f_x - f_4));
else
return 0;
}
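/*
* Sanity checks (editor's note, verifiable from the math above): for any
* sharpness f_a, cubic_conv(f_a, 0) == I2F(1), while cubic_conv(f_a,
* I2F(1)) and cubic_conv(f_a, I2F(2)) are both 0, so the kernel always
* reproduces the original samples exactly.
*/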
/*
* On entry, "weight" is a coefficient suitable for bilinear mode,
* which is converted to a set of four suitable for bicubic mode.
*
* "weight 512" means all of pixel 0;
* "weight 256" means half of pixel 0 and half of pixel 1;
* "weight 0" means all of pixel 1;
*
* "offset" is increment to next source pixel sample location.
*/
static void jz4760_set_coefs(struct ingenic_ipu *ipu, unsigned int reg,
unsigned int sharpness, bool downscale,
unsigned int weight, unsigned int offset)
{
u32 val;
s32 w0, w1, w2, w3; /* Pixel weights at X (or Y) offsets -1,0,1,2 */
weight = clamp_val(weight, 0, 512);
if (sharpness < 2) {
/*
* When sharpness setting is 0, emulate nearest-neighbor.
* When sharpness setting is 1, emulate bilinear.
*/
if (sharpness == 0)
weight = weight >= 256 ? 512 : 0;
w0 = 0;
w1 = weight;
w2 = 512 - weight;
w3 = 0;
} else {
const s32 f_a = SHARPNESS_INCR * sharpness;
const s32 f_h = I2F(1) / 2; /* Round up 0.5 */
/*
* Note that always rounding towards +infinity here is intended.
* The resulting coefficients match a round-to-nearest-int
* double floating-point implementation.
*/
weight = 512 - weight;
w0 = F2I(f_h + 512 * cubic_conv(f_a, I2F(512 + weight) / 512));
w1 = F2I(f_h + 512 * cubic_conv(f_a, I2F(0 + weight) / 512));
w2 = F2I(f_h + 512 * cubic_conv(f_a, I2F(512 - weight) / 512));
w3 = F2I(f_h + 512 * cubic_conv(f_a, I2F(1024 - weight) / 512));
w0 = clamp_val(w0, -1024, 1023);
w1 = clamp_val(w1, -1024, 1023);
w2 = clamp_val(w2, -1024, 1023);
w3 = clamp_val(w3, -1024, 1023);
}
val = ((w1 & JZ4760_IPU_RSZ_COEF_MASK) << JZ4760_IPU_RSZ_COEF31_LSB) |
((w0 & JZ4760_IPU_RSZ_COEF_MASK) << JZ4760_IPU_RSZ_COEF20_LSB);
regmap_write(ipu->map, reg, val);
val = ((w3 & JZ4760_IPU_RSZ_COEF_MASK) << JZ4760_IPU_RSZ_COEF31_LSB) |
((w2 & JZ4760_IPU_RSZ_COEF_MASK) << JZ4760_IPU_RSZ_COEF20_LSB) |
((offset & JZ4760_IPU_RSZ_OFFSET_MASK) << JZ4760_IPU_RSZ_OFFSET_LSB);
regmap_write(ipu->map, reg, val);
}
static void jz4725b_set_coefs(struct ingenic_ipu *ipu, unsigned int reg,
unsigned int sharpness, bool downscale,
unsigned int weight, unsigned int offset)
{
u32 val = JZ4725B_IPU_RSZ_LUT_OUT_EN;
unsigned int i;
weight = clamp_val(weight, 0, 512);
if (sharpness == 0)
weight = weight >= 256 ? 512 : 0;
val |= (weight & JZ4725B_IPU_RSZ_LUT_COEF_MASK) << JZ4725B_IPU_RSZ_LUT_COEF_LSB;
if (downscale || !!offset)
val |= JZ4725B_IPU_RSZ_LUT_IN_EN;
regmap_write(ipu->map, reg, val);
if (downscale) {
for (i = 1; i < offset; i++)
regmap_write(ipu->map, reg, JZ4725B_IPU_RSZ_LUT_IN_EN);
}
}
static void ingenic_ipu_set_downscale_coefs(struct ingenic_ipu *ipu,
unsigned int reg,
unsigned int num,
unsigned int denom)
{
unsigned int i, offset, weight, weight_num = denom;
for (i = 0; i < num; i++) {
weight_num = num + (weight_num - num) % (num * 2);
weight = 512 - 512 * (weight_num - num) / (num * 2);
weight_num += denom * 2;
offset = (weight_num - num) / (num * 2);
ipu->soc_info->set_coefs(ipu, reg, ipu->sharpness,
true, weight, offset);
}
}
static void ingenic_ipu_set_integer_upscale_coefs(struct ingenic_ipu *ipu,
unsigned int reg,
unsigned int num)
{
/*
* Force nearest-neighbor scaling and use simple math when upscaling
* by an integer ratio. It looks better, and fixes a few problem cases.
*/
unsigned int i;
for (i = 0; i < num; i++)
ipu->soc_info->set_coefs(ipu, reg, 0, false, 512, i == num - 1);
}
static void ingenic_ipu_set_upscale_coefs(struct ingenic_ipu *ipu,
unsigned int reg,
unsigned int num,
unsigned int denom)
{
unsigned int i, offset, weight, weight_num = 0;
for (i = 0; i < num; i++) {
weight = 512 - 512 * weight_num / num;
weight_num += denom;
offset = weight_num >= num;
if (offset)
weight_num -= num;
ipu->soc_info->set_coefs(ipu, reg, ipu->sharpness,
false, weight, offset);
}
}
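/*
* Worked example (editor's illustration) for 3:2 upscaling (num = 3,
* denom = 2): the loop above emits the (weight, offset) pairs (512, 0),
* (171, 1) and (342, 1), i.e. three output pixels for every two input
* pixels, with the offsets summing to denom.
*/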
static void ingenic_ipu_set_coefs(struct ingenic_ipu *ipu, unsigned int reg,
unsigned int num, unsigned int denom)
{
/* Begin programming the LUT */
regmap_write(ipu->map, reg, -1);
if (denom > num)
ingenic_ipu_set_downscale_coefs(ipu, reg, num, denom);
else if (denom == 1)
ingenic_ipu_set_integer_upscale_coefs(ipu, reg, num);
else
ingenic_ipu_set_upscale_coefs(ipu, reg, num, denom);
}
static int reduce_fraction(unsigned int *num, unsigned int *denom)
{
unsigned long d = gcd(*num, *denom);
/* The scaling table has only 31 entries */
if (*num > 31 * d)
return -EINVAL;
*num /= d;
*denom /= d;
return 0;
}
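/*
* Worked example (editor's illustration): scaling a 320-pixel-wide source
* to 480 pixels starts with num = 480 and denom = 320; gcd() returns 160,
* the fraction reduces to 3:2, and that fits the 31-entry LUT. Upscaling
* 320 to 10240 pixels would reduce to 32:1 and is rejected with -EINVAL.
*/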
static inline bool osd_changed(struct drm_plane_state *state,
struct drm_plane_state *oldstate)
{
return state->src_x != oldstate->src_x ||
state->src_y != oldstate->src_y ||
state->src_w != oldstate->src_w ||
state->src_h != oldstate->src_h ||
state->crtc_x != oldstate->crtc_x ||
state->crtc_y != oldstate->crtc_y ||
state->crtc_w != oldstate->crtc_w ||
state->crtc_h != oldstate->crtc_h;
}
static void ingenic_ipu_plane_atomic_update(struct drm_plane *plane,
struct drm_plane_state *oldstate)
{
struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
struct drm_plane_state *state = plane->state;
const struct drm_format_info *finfo;
u32 ctrl, stride = 0, coef_index = 0, format = 0;
bool needs_modeset, upscaling_w, upscaling_h;
if (!state || !state->fb)
return;
finfo = drm_format_info(state->fb->format->format);
/* Reset all the registers if needed */
needs_modeset = drm_atomic_crtc_needs_modeset(state->crtc->state);
if (needs_modeset) {
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_RST);
/* Enable the chip */
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL,
JZ_IPU_CTRL_CHIP_EN | JZ_IPU_CTRL_LCDC_SEL);
}
/* New addresses will be committed in vblank handler... */
ipu->addr_y = drm_fb_cma_get_gem_addr(state->fb, state, 0);
if (finfo->num_planes > 1)
ipu->addr_u = drm_fb_cma_get_gem_addr(state->fb, state, 1);
if (finfo->num_planes > 2)
ipu->addr_v = drm_fb_cma_get_gem_addr(state->fb, state, 2);
if (!needs_modeset)
return;
/* Or right here if we're doing a full modeset. */
regmap_write(ipu->map, JZ_REG_IPU_Y_ADDR, ipu->addr_y);
regmap_write(ipu->map, JZ_REG_IPU_U_ADDR, ipu->addr_u);
regmap_write(ipu->map, JZ_REG_IPU_V_ADDR, ipu->addr_v);
if (finfo->num_planes == 1)
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_SPKG_SEL);
ingenic_drm_plane_config(ipu->master, plane, DRM_FORMAT_XRGB8888);
/* Set the input height/width/strides */
if (finfo->num_planes > 2)
stride = ((state->src_w >> 16) * finfo->cpp[2] / finfo->hsub)
<< JZ_IPU_UV_STRIDE_V_LSB;
if (finfo->num_planes > 1)
stride |= ((state->src_w >> 16) * finfo->cpp[1] / finfo->hsub)
<< JZ_IPU_UV_STRIDE_U_LSB;
regmap_write(ipu->map, JZ_REG_IPU_UV_STRIDE, stride);
stride = ((state->src_w >> 16) * finfo->cpp[0]) << JZ_IPU_Y_STRIDE_Y_LSB;
regmap_write(ipu->map, JZ_REG_IPU_Y_STRIDE, stride);
regmap_write(ipu->map, JZ_REG_IPU_IN_GS,
(stride << JZ_IPU_IN_GS_W_LSB) |
((state->src_h >> 16) << JZ_IPU_IN_GS_H_LSB));
switch (finfo->format) {
case DRM_FORMAT_XRGB1555:
format = JZ_IPU_D_FMT_IN_FMT_RGB555 |
JZ_IPU_D_FMT_RGB_OUT_OFT_RGB;
break;
case DRM_FORMAT_XBGR1555:
format = JZ_IPU_D_FMT_IN_FMT_RGB555 |
JZ_IPU_D_FMT_RGB_OUT_OFT_BGR;
break;
case DRM_FORMAT_RGB565:
format = JZ_IPU_D_FMT_IN_FMT_RGB565 |
JZ_IPU_D_FMT_RGB_OUT_OFT_RGB;
break;
case DRM_FORMAT_BGR565:
format = JZ_IPU_D_FMT_IN_FMT_RGB565 |
JZ_IPU_D_FMT_RGB_OUT_OFT_BGR;
break;
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_XYUV8888:
format = JZ_IPU_D_FMT_IN_FMT_RGB888 |
JZ_IPU_D_FMT_RGB_OUT_OFT_RGB;
break;
case DRM_FORMAT_XBGR8888:
format = JZ_IPU_D_FMT_IN_FMT_RGB888 |
JZ_IPU_D_FMT_RGB_OUT_OFT_BGR;
break;
case DRM_FORMAT_YUYV:
format = JZ_IPU_D_FMT_IN_FMT_YUV422 |
JZ_IPU_D_FMT_YUV_VY1UY0;
break;
case DRM_FORMAT_YVYU:
format = JZ_IPU_D_FMT_IN_FMT_YUV422 |
JZ_IPU_D_FMT_YUV_UY1VY0;
break;
case DRM_FORMAT_UYVY:
format = JZ_IPU_D_FMT_IN_FMT_YUV422 |
JZ_IPU_D_FMT_YUV_Y1VY0U;
break;
case DRM_FORMAT_VYUY:
format = JZ_IPU_D_FMT_IN_FMT_YUV422 |
JZ_IPU_D_FMT_YUV_Y1UY0V;
break;
case DRM_FORMAT_YUV411:
format = JZ_IPU_D_FMT_IN_FMT_YUV411;
break;
case DRM_FORMAT_YUV420:
format = JZ_IPU_D_FMT_IN_FMT_YUV420;
break;
case DRM_FORMAT_YUV422:
format = JZ_IPU_D_FMT_IN_FMT_YUV422;
break;
case DRM_FORMAT_YUV444:
format = JZ_IPU_D_FMT_IN_FMT_YUV444;
break;
default:
WARN_ONCE(1, "Unsupported format");
break;
}
/* Fix output to RGB888 */
format |= JZ_IPU_D_FMT_OUT_FMT_RGB888;
/* Set pixel format */
regmap_write(ipu->map, JZ_REG_IPU_D_FMT, format);
/* Set the output height/width/stride */
regmap_write(ipu->map, JZ_REG_IPU_OUT_GS,
((state->crtc_w * 4) << JZ_IPU_OUT_GS_W_LSB)
| state->crtc_h << JZ_IPU_OUT_GS_H_LSB);
regmap_write(ipu->map, JZ_REG_IPU_OUT_STRIDE, state->crtc_w * 4);
if (finfo->is_yuv) {
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_CSC_EN);
/*
* Offsets for Chroma/Luma.
* y = source Y - LUMA,
* u = source Cb - CHROMA,
* v = source Cr - CHROMA
*/
regmap_write(ipu->map, JZ_REG_IPU_CSC_OFFSET,
128 << JZ_IPU_CSC_OFFSET_CHROMA_LSB |
0 << JZ_IPU_CSC_OFFSET_LUMA_LSB);
/*
* YUV422 to RGB conversion table.
* R = C0 / 0x400 * y + C1 / 0x400 * v
* G = C0 / 0x400 * y - C2 / 0x400 * u - C3 / 0x400 * v
* B = C0 / 0x400 * y + C4 / 0x400 * u
*/
regmap_write(ipu->map, JZ_REG_IPU_CSC_C0_COEF, 0x4a8);
regmap_write(ipu->map, JZ_REG_IPU_CSC_C1_COEF, 0x662);
regmap_write(ipu->map, JZ_REG_IPU_CSC_C2_COEF, 0x191);
regmap_write(ipu->map, JZ_REG_IPU_CSC_C3_COEF, 0x341);
regmap_write(ipu->map, JZ_REG_IPU_CSC_C4_COEF, 0x811);
}
ctrl = 0;
/*
* Must set ZOOM_SEL before programming bicubic LUTs.
* If the IPU supports bicubic, we enable it unconditionally, since it
* can do anything bilinear can and more.
*/
if (ipu->soc_info->has_bicubic)
ctrl |= JZ_IPU_CTRL_ZOOM_SEL;
upscaling_w = ipu->num_w > ipu->denom_w;
if (upscaling_w)
ctrl |= JZ_IPU_CTRL_HSCALE;
if (ipu->num_w != 1 || ipu->denom_w != 1) {
if (!ipu->soc_info->has_bicubic && !upscaling_w)
coef_index |= (ipu->denom_w - 1) << 16;
else
coef_index |= (ipu->num_w - 1) << 16;
ctrl |= JZ_IPU_CTRL_HRSZ_EN;
}
upscaling_h = ipu->num_h > ipu->denom_h;
if (upscaling_h)
ctrl |= JZ_IPU_CTRL_VSCALE;
if (ipu->num_h != 1 || ipu->denom_h != 1) {
if (!ipu->soc_info->has_bicubic && !upscaling_h)
coef_index |= ipu->denom_h - 1;
else
coef_index |= ipu->num_h - 1;
ctrl |= JZ_IPU_CTRL_VRSZ_EN;
}
regmap_update_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_ZOOM_SEL |
JZ_IPU_CTRL_HRSZ_EN | JZ_IPU_CTRL_VRSZ_EN |
JZ_IPU_CTRL_HSCALE | JZ_IPU_CTRL_VSCALE, ctrl);
/* Set the LUT index register */
regmap_write(ipu->map, JZ_REG_IPU_RSZ_COEF_INDEX, coef_index);
if (ipu->num_w != 1 || ipu->denom_w != 1)
ingenic_ipu_set_coefs(ipu, JZ_REG_IPU_HRSZ_COEF_LUT,
ipu->num_w, ipu->denom_w);
if (ipu->num_h != 1 || ipu->denom_h != 1)
ingenic_ipu_set_coefs(ipu, JZ_REG_IPU_VRSZ_COEF_LUT,
ipu->num_h, ipu->denom_h);
/* Clear STATUS register */
regmap_write(ipu->map, JZ_REG_IPU_STATUS, 0);
/* Start IPU */
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL,
JZ_IPU_CTRL_RUN | JZ_IPU_CTRL_FM_IRQ_EN);
dev_dbg(ipu->dev, "Scaling %ux%u to %ux%u (%u:%u horiz, %u:%u vert)\n",
state->src_w >> 16, state->src_h >> 16,
state->crtc_w, state->crtc_h,
ipu->num_w, ipu->denom_w, ipu->num_h, ipu->denom_h);
}
static int ingenic_ipu_plane_atomic_check(struct drm_plane *plane,
struct drm_plane_state *state)
{
unsigned int num_w, denom_w, num_h, denom_h, xres, yres;
struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
struct drm_crtc *crtc = state->crtc ?: plane->state->crtc;
struct drm_crtc_state *crtc_state;
if (!crtc)
return 0;
crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc);
if (WARN_ON(!crtc_state))
return -EINVAL;
/* Request a full modeset if we are enabling or disabling the IPU. */
if (!plane->state->crtc ^ !state->crtc)
crtc_state->mode_changed = true;
if (!state->crtc ||
!crtc_state->mode.hdisplay || !crtc_state->mode.vdisplay)
return 0;
/* Plane must be fully visible */
if (state->crtc_x < 0 || state->crtc_y < 0 ||
state->crtc_x + state->crtc_w > crtc_state->mode.hdisplay ||
state->crtc_y + state->crtc_h > crtc_state->mode.vdisplay)
return -EINVAL;
/* Minimum size is 4x4 */
if ((state->src_w >> 16) < 4 || (state->src_h >> 16) < 4)
return -EINVAL;
/* Input and output lines must have an even number of pixels. */
if (((state->src_w >> 16) & 1) || (state->crtc_w & 1))
return -EINVAL;
if (!osd_changed(state, plane->state))
return 0;
crtc_state->mode_changed = true;
xres = state->src_w >> 16;
yres = state->src_h >> 16;
/* Adjust the coefficients until we find a valid configuration */
for (denom_w = xres, num_w = state->crtc_w;
num_w <= crtc_state->mode.hdisplay; num_w++)
if (!reduce_fraction(&num_w, &denom_w))
break;
if (num_w > crtc_state->mode.hdisplay)
return -EINVAL;
for (denom_h = yres, num_h = state->crtc_h;
num_h <= crtc_state->mode.vdisplay; num_h++)
if (!reduce_fraction(&num_h, &denom_h))
break;
if (num_h > crtc_state->mode.vdisplay)
return -EINVAL;
ipu->num_w = num_w;
ipu->num_h = num_h;
ipu->denom_w = denom_w;
ipu->denom_h = denom_h;
return 0;
}
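/*
* Editor's illustration of the loop above: scaling a 400-pixel-wide source
* onto a 1440-pixel-wide mode starts with num_w = 1440 and denom_w = 400,
* which reduce_fraction() turns into 18:5 on the first try (gcd 80, and
* 18 <= 31). Only if no candidate numerator up to mode.hdisplay yields a
* reduced value of at most 31 does the check fail with -EINVAL.
*/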
static void ingenic_ipu_plane_atomic_disable(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_STOP);
regmap_clear_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_CHIP_EN);
ingenic_drm_plane_disable(ipu->master, plane);
}
static const struct drm_plane_helper_funcs ingenic_ipu_plane_helper_funcs = {
.atomic_update = ingenic_ipu_plane_atomic_update,
.atomic_check = ingenic_ipu_plane_atomic_check,
.atomic_disable = ingenic_ipu_plane_atomic_disable,
.prepare_fb = drm_gem_fb_prepare_fb,
};
static int
ingenic_ipu_plane_atomic_get_property(struct drm_plane *plane,
const struct drm_plane_state *state,
struct drm_property *property, u64 *val)
{
struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
if (property != ipu->sharpness_prop)
return -EINVAL;
*val = ipu->sharpness;
return 0;
}
static int
ingenic_ipu_plane_atomic_set_property(struct drm_plane *plane,
struct drm_plane_state *state,
struct drm_property *property, u64 val)
{
struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
struct drm_crtc_state *crtc_state;
if (property != ipu->sharpness_prop)
return -EINVAL;
ipu->sharpness = val;
if (state->crtc) {
crtc_state = drm_atomic_get_existing_crtc_state(state->state, state->crtc);
if (WARN_ON(!crtc_state))
return -EINVAL;
crtc_state->mode_changed = true;
}
return 0;
}
static const struct drm_plane_funcs ingenic_ipu_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.reset = drm_atomic_helper_plane_reset,
.destroy = drm_plane_cleanup,
.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
.atomic_get_property = ingenic_ipu_plane_atomic_get_property,
.atomic_set_property = ingenic_ipu_plane_atomic_set_property,
};
static irqreturn_t ingenic_ipu_irq_handler(int irq, void *arg)
{
struct ingenic_ipu *ipu = arg;
struct drm_crtc *crtc = drm_crtc_from_index(ipu->drm, 0);
unsigned int dummy;
/* dummy read allows CPU to reconfigure IPU */
regmap_read(ipu->map, JZ_REG_IPU_STATUS, &dummy);
/* ACK interrupt */
regmap_write(ipu->map, JZ_REG_IPU_STATUS, 0);
/* Set previously cached addresses */
regmap_write(ipu->map, JZ_REG_IPU_Y_ADDR, ipu->addr_y);
regmap_write(ipu->map, JZ_REG_IPU_U_ADDR, ipu->addr_u);
regmap_write(ipu->map, JZ_REG_IPU_V_ADDR, ipu->addr_v);
/* Run IPU for the new frame */
regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_RUN);
drm_crtc_handle_vblank(crtc);
return IRQ_HANDLED;
}
static const struct regmap_config ingenic_ipu_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
.max_register = JZ_REG_IPU_OUT_PHY_T_ADDR,
};
static int ingenic_ipu_bind(struct device *dev, struct device *master, void *d)
{
struct platform_device *pdev = to_platform_device(dev);
const struct soc_info *soc_info;
struct drm_device *drm = d;
struct drm_plane *plane;
struct ingenic_ipu *ipu;
void __iomem *base;
unsigned int sharpness_max;
int err, irq;
ipu = devm_kzalloc(dev, sizeof(*ipu), GFP_KERNEL);
if (!ipu)
return -ENOMEM;
soc_info = of_device_get_match_data(dev);
if (!soc_info) {
dev_err(dev, "Missing platform data\n");
return -EINVAL;
}
ipu->dev = dev;
ipu->drm = drm;
ipu->master = master;
ipu->soc_info = soc_info;
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base)) {
dev_err(dev, "Failed to get memory resource\n");
return PTR_ERR(base);
}
ipu->map = devm_regmap_init_mmio(dev, base, &ingenic_ipu_regmap_config);
if (IS_ERR(ipu->map)) {
dev_err(dev, "Failed to create regmap\n");
return PTR_ERR(ipu->map);
}
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
ipu->clk = devm_clk_get(dev, "ipu");
if (IS_ERR(ipu->clk)) {
dev_err(dev, "Failed to get pixel clock\n");
return PTR_ERR(ipu->clk);
}
err = devm_request_irq(dev, irq, ingenic_ipu_irq_handler, 0,
dev_name(dev), ipu);
if (err) {
dev_err(dev, "Unable to request IRQ\n");
return err;
}
plane = &ipu->plane;
dev_set_drvdata(dev, plane);
drm_plane_helper_add(plane, &ingenic_ipu_plane_helper_funcs);
err = drm_universal_plane_init(drm, plane, 1, &ingenic_ipu_plane_funcs,
soc_info->formats, soc_info->num_formats,
NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
if (err) {
dev_err(dev, "Failed to init plane: %i\n", err);
return err;
}
/*
* Sharpness settings range is [0,32]
* 0 : nearest-neighbor
* 1 : bilinear
* 2 .. 32 : bicubic (translated to sharpness factor -0.25 .. -4.0)
*/
sharpness_max = soc_info->has_bicubic ? 32 : 1;
ipu->sharpness_prop = drm_property_create_range(drm, 0, "sharpness",
0, sharpness_max);
if (!ipu->sharpness_prop) {
dev_err(dev, "Unable to create sharpness property\n");
return -ENOMEM;
}
/* Default sharpness factor: -0.125 * 8 = -1.0 */
ipu->sharpness = soc_info->has_bicubic ? 8 : 1;
drm_object_attach_property(&plane->base, ipu->sharpness_prop,
ipu->sharpness);
err = clk_prepare_enable(ipu->clk);
if (err) {
dev_err(dev, "Unable to enable clock\n");
return err;
}
return 0;
}
static void ingenic_ipu_unbind(struct device *dev,
struct device *master, void *d)
{
struct ingenic_ipu *ipu = dev_get_drvdata(dev);
clk_disable_unprepare(ipu->clk);
}
static const struct component_ops ingenic_ipu_ops = {
.bind = ingenic_ipu_bind,
.unbind = ingenic_ipu_unbind,
};
static int ingenic_ipu_probe(struct platform_device *pdev)
{
return component_add(&pdev->dev, &ingenic_ipu_ops);
}
static int ingenic_ipu_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &ingenic_ipu_ops);
return 0;
}
static const u32 jz4725b_ipu_formats[] = {
DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU,
DRM_FORMAT_UYVY,
DRM_FORMAT_VYUY,
DRM_FORMAT_YUV411,
DRM_FORMAT_YUV420,
DRM_FORMAT_YUV422,
DRM_FORMAT_YUV444,
};
static const struct soc_info jz4725b_soc_info = {
.formats = jz4725b_ipu_formats,
.num_formats = ARRAY_SIZE(jz4725b_ipu_formats),
.has_bicubic = false,
.set_coefs = jz4725b_set_coefs,
};
static const u32 jz4760_ipu_formats[] = {
DRM_FORMAT_XRGB1555,
DRM_FORMAT_XBGR1555,
DRM_FORMAT_RGB565,
DRM_FORMAT_BGR565,
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU,
DRM_FORMAT_UYVY,
DRM_FORMAT_VYUY,
DRM_FORMAT_YUV411,
DRM_FORMAT_YUV420,
DRM_FORMAT_YUV422,
DRM_FORMAT_YUV444,
DRM_FORMAT_XYUV8888,
};
static const struct soc_info jz4760_soc_info = {
.formats = jz4760_ipu_formats,
.num_formats = ARRAY_SIZE(jz4760_ipu_formats),
.has_bicubic = true,
.set_coefs = jz4760_set_coefs,
};
static const struct of_device_id ingenic_ipu_of_match[] = {
{ .compatible = "ingenic,jz4725b-ipu", .data = &jz4725b_soc_info },
{ .compatible = "ingenic,jz4760-ipu", .data = &jz4760_soc_info },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, ingenic_ipu_of_match);
static struct platform_driver ingenic_ipu_driver = {
.driver = {
.name = "ingenic-ipu",
.of_match_table = ingenic_ipu_of_match,
},
.probe = ingenic_ipu_probe,
.remove = ingenic_ipu_remove,
};
struct platform_driver *ingenic_ipu_driver_ptr = &ingenic_ipu_driver;
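/*
* Illustrative sketch, not part of this hunk: exporting the driver through
* a pointer lets the master KMS driver register both platform drivers in
* one place, roughly as follows (ingenic_drm_driver is an assumed symbol
* name from the master driver):
*/
static int ingenic_drm_init_example(void)
{
	int err;

	if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU)) {
		err = platform_driver_register(ingenic_ipu_driver_ptr);
		if (err)
			return err;
	}

	return platform_driver_register(&ingenic_drm_driver);
}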


@ -0,0 +1,109 @@
/* SPDX-License-Identifier: GPL-2.0 */
//
// Ingenic JZ47xx IPU - Register definitions and private API
//
// Copyright (C) 2020, Paul Cercueil <paul@crapouillou.net>
#ifndef DRIVERS_GPU_DRM_INGENIC_INGENIC_IPU_H
#define DRIVERS_GPU_DRM_INGENIC_INGENIC_IPU_H
#include <linux/bitops.h>
#define JZ_REG_IPU_CTRL 0x00
#define JZ_REG_IPU_STATUS 0x04
#define JZ_REG_IPU_D_FMT 0x08
#define JZ_REG_IPU_Y_ADDR 0x0c
#define JZ_REG_IPU_U_ADDR 0x10
#define JZ_REG_IPU_V_ADDR 0x14
#define JZ_REG_IPU_IN_GS 0x18
#define JZ_REG_IPU_Y_STRIDE 0x1c
#define JZ_REG_IPU_UV_STRIDE 0x20
#define JZ_REG_IPU_OUT_ADDR 0x24
#define JZ_REG_IPU_OUT_GS 0x28
#define JZ_REG_IPU_OUT_STRIDE 0x2c
#define JZ_REG_IPU_RSZ_COEF_INDEX 0x30
#define JZ_REG_IPU_CSC_C0_COEF 0x34
#define JZ_REG_IPU_CSC_C1_COEF 0x38
#define JZ_REG_IPU_CSC_C2_COEF 0x3c
#define JZ_REG_IPU_CSC_C3_COEF 0x40
#define JZ_REG_IPU_CSC_C4_COEF 0x44
#define JZ_REG_IPU_HRSZ_COEF_LUT 0x48
#define JZ_REG_IPU_VRSZ_COEF_LUT 0x4c
#define JZ_REG_IPU_CSC_OFFSET 0x50
#define JZ_REG_IPU_Y_PHY_T_ADDR 0x54
#define JZ_REG_IPU_U_PHY_T_ADDR 0x58
#define JZ_REG_IPU_V_PHY_T_ADDR 0x5c
#define JZ_REG_IPU_OUT_PHY_T_ADDR 0x60
#define JZ_IPU_CTRL_ADDR_SEL BIT(20)
#define JZ_IPU_CTRL_ZOOM_SEL BIT(18)
#define JZ_IPU_CTRL_DFIX_SEL BIT(17)
#define JZ_IPU_CTRL_LCDC_SEL BIT(11)
#define JZ_IPU_CTRL_SPKG_SEL BIT(10)
#define JZ_IPU_CTRL_VSCALE BIT(9)
#define JZ_IPU_CTRL_HSCALE BIT(8)
#define JZ_IPU_CTRL_STOP BIT(7)
#define JZ_IPU_CTRL_RST BIT(6)
#define JZ_IPU_CTRL_FM_IRQ_EN BIT(5)
#define JZ_IPU_CTRL_CSC_EN BIT(4)
#define JZ_IPU_CTRL_VRSZ_EN BIT(3)
#define JZ_IPU_CTRL_HRSZ_EN BIT(2)
#define JZ_IPU_CTRL_RUN BIT(1)
#define JZ_IPU_CTRL_CHIP_EN BIT(0)
#define JZ_IPU_STATUS_OUT_END BIT(0)
#define JZ_IPU_IN_GS_H_LSB 0x0
#define JZ_IPU_IN_GS_W_LSB 0x10
#define JZ_IPU_OUT_GS_H_LSB 0x0
#define JZ_IPU_OUT_GS_W_LSB 0x10
#define JZ_IPU_Y_STRIDE_Y_LSB 0
#define JZ_IPU_UV_STRIDE_U_LSB 16
#define JZ_IPU_UV_STRIDE_V_LSB 0
#define JZ_IPU_D_FMT_IN_FMT_LSB 0
#define JZ_IPU_D_FMT_IN_FMT_RGB555 (0x0 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_YUV420 (0x0 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_YUV422 (0x1 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_RGB888 (0x2 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_YUV444 (0x2 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_RGB565 (0x3 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_YUV_FMT_LSB 2
#define JZ_IPU_D_FMT_YUV_Y1UY0V (0x0 << JZ_IPU_D_FMT_YUV_FMT_LSB)
#define JZ_IPU_D_FMT_YUV_Y1VY0U (0x1 << JZ_IPU_D_FMT_YUV_FMT_LSB)
#define JZ_IPU_D_FMT_YUV_UY1VY0 (0x2 << JZ_IPU_D_FMT_YUV_FMT_LSB)
#define JZ_IPU_D_FMT_YUV_VY1UY0 (0x3 << JZ_IPU_D_FMT_YUV_FMT_LSB)
#define JZ_IPU_D_FMT_IN_FMT_YUV411 (0x3 << JZ_IPU_D_FMT_IN_FMT_LSB)
#define JZ_IPU_D_FMT_OUT_FMT_LSB 19
#define JZ_IPU_D_FMT_OUT_FMT_RGB555 (0x0 << JZ_IPU_D_FMT_OUT_FMT_LSB)
#define JZ_IPU_D_FMT_OUT_FMT_RGB565 (0x1 << JZ_IPU_D_FMT_OUT_FMT_LSB)
#define JZ_IPU_D_FMT_OUT_FMT_RGB888 (0x2 << JZ_IPU_D_FMT_OUT_FMT_LSB)
#define JZ_IPU_D_FMT_OUT_FMT_YUV422 (0x3 << JZ_IPU_D_FMT_OUT_FMT_LSB)
#define JZ_IPU_D_FMT_OUT_FMT_RGBAAA (0x4 << JZ_IPU_D_FMT_OUT_FMT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_LSB 22
#define JZ_IPU_D_FMT_RGB_OUT_OFT_RGB (0x0 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_RBG (0x1 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_GBR (0x2 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_GRB (0x3 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_BRG (0x4 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ_IPU_D_FMT_RGB_OUT_OFT_BGR (0x5 << JZ_IPU_D_FMT_RGB_OUT_OFT_LSB)
#define JZ4725B_IPU_RSZ_LUT_COEF_LSB 2
#define JZ4725B_IPU_RSZ_LUT_COEF_MASK 0x7ff
#define JZ4725B_IPU_RSZ_LUT_IN_EN BIT(1)
#define JZ4725B_IPU_RSZ_LUT_OUT_EN BIT(0)
#define JZ4760_IPU_RSZ_COEF20_LSB 6
#define JZ4760_IPU_RSZ_COEF31_LSB 17
#define JZ4760_IPU_RSZ_COEF_MASK 0x7ff
#define JZ4760_IPU_RSZ_OFFSET_LSB 1
#define JZ4760_IPU_RSZ_OFFSET_MASK 0x1f
#define JZ_IPU_CSC_OFFSET_CHROMA_LSB 16
#define JZ_IPU_CSC_OFFSET_LUMA_LSB 16
#endif /* DRIVERS_GPU_DRM_INGENIC_INGENIC_IPU_H */
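/*
* Illustrative sketch, not part of this patch: the *_GS geometry registers
* above pack width into the high half-word and height into the low one, so
* a hypothetical packing helper would be:
*/
static inline u32 jz_ipu_gs(u32 width, u32 height)
{
	return (width << JZ_IPU_IN_GS_W_LSB) | (height << JZ_IPU_IN_GS_H_LSB);
}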


@ -69,6 +69,11 @@ static const uint32_t mxsfb_formats[] = {
DRM_FORMAT_RGB565
};
static const uint64_t mxsfb_modifiers[] = {
DRM_FORMAT_MOD_LINEAR,
DRM_FORMAT_MOD_INVALID
};
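/*
* Note: modifier arrays passed to drm_simple_display_pipe_init() must be
* terminated with DRM_FORMAT_MOD_INVALID, hence the sentinel entry above.
*/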
static struct mxsfb_drm_private *
drm_pipe_to_mxsfb_drm_private(struct drm_simple_display_pipe *pipe)
{
@ -191,7 +196,7 @@ static struct drm_simple_display_pipe_funcs mxsfb_funcs = {
.disable_vblank = mxsfb_pipe_disable_vblank,
};
static int mxsfb_load(struct drm_device *drm, unsigned long flags)
static int mxsfb_load(struct drm_device *drm)
{
struct platform_device *pdev = to_platform_device(drm->dev);
struct mxsfb_drm_private *mxsfb;
@ -244,8 +249,8 @@ static int mxsfb_load(struct drm_device *drm, unsigned long flags)
}
ret = drm_simple_display_pipe_init(drm, &mxsfb->pipe, &mxsfb_funcs,
mxsfb_formats, ARRAY_SIZE(mxsfb_formats), NULL,
mxsfb->connector);
mxsfb_formats, ARRAY_SIZE(mxsfb_formats),
mxsfb_modifiers, mxsfb->connector);
if (ret < 0) {
dev_err(drm->dev, "Cannot setup simple display pipe\n");
goto err_vblank;
@ -398,7 +403,7 @@ static int mxsfb_probe(struct platform_device *pdev)
if (IS_ERR(drm))
return PTR_ERR(drm);
ret = mxsfb_load(drm, 0);
ret = mxsfb_load(drm);
if (ret)
goto err_free;


@ -44,6 +44,9 @@
#include <subdev/bios/pll.h>
#include <subdev/clk.h>
#include <nvif/event.h>
#include <nvif/cl0046.h>
static int
nv04_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb);
@ -756,6 +759,7 @@ static void nv_crtc_destroy(struct drm_crtc *crtc)
nouveau_bo_unmap(nv_crtc->cursor.nvbo);
nouveau_bo_unpin(nv_crtc->cursor.nvbo);
nouveau_bo_ref(NULL, &nv_crtc->cursor.nvbo);
nvif_notify_fini(&nv_crtc->vblank);
kfree(nv_crtc);
}
@ -1297,9 +1301,19 @@ create_primary_plane(struct drm_device *dev)
return primary;
}
static int nv04_crtc_vblank_handler(struct nvif_notify *notify)
{
struct nouveau_crtc *nv_crtc =
container_of(notify, struct nouveau_crtc, vblank);
drm_crtc_handle_vblank(&nv_crtc->base);
return NVIF_NOTIFY_KEEP;
}
int
nv04_crtc_create(struct drm_device *dev, int crtc_num)
{
struct nouveau_display *disp = nouveau_display(dev);
struct nouveau_crtc *nv_crtc;
int ret;
@ -1337,5 +1351,14 @@ nv04_crtc_create(struct drm_device *dev, int crtc_num)
nv04_cursor_init(nv_crtc);
return 0;
ret = nvif_notify_init(&disp->disp.object, nv04_crtc_vblank_handler,
false, NV04_DISP_NTFY_VBLANK,
&(struct nvif_notify_head_req_v0) {
.head = nv_crtc->index,
},
sizeof(struct nvif_notify_head_req_v0),
sizeof(struct nvif_notify_head_rep_v0),
&nv_crtc->vblank);
return ret;
}


@ -10,6 +10,10 @@ nouveau-y += dispnv50/core917d.o
nouveau-y += dispnv50/corec37d.o
nouveau-y += dispnv50/corec57d.o
nouveau-$(CONFIG_DEBUG_FS) += dispnv50/crc.o
nouveau-$(CONFIG_DEBUG_FS) += dispnv50/crc907d.o
nouveau-$(CONFIG_DEBUG_FS) += dispnv50/crcc37d.o
nouveau-y += dispnv50/dac507d.o
nouveau-y += dispnv50/dac907d.o


@ -2,6 +2,9 @@
#define __NV50_KMS_ATOM_H__
#define nv50_atom(p) container_of((p), struct nv50_atom, state)
#include <drm/drm_atomic.h>
#include "crc.h"
struct nouveau_encoder;
struct nv50_atom {
struct drm_atomic_state state;
@ -18,6 +21,7 @@ struct nv50_head_atom {
struct {
u32 mask;
u32 owned;
u32 olut;
} wndw;
@ -114,9 +118,12 @@ struct nv50_head_atom {
u8 nhsync:1;
u8 nvsync:1;
u8 depth:4;
u8 crc_raster:2;
u8 bpc;
} or;
struct nv50_crc_atom crc;
/* Currently only used for MST */
struct {
int pbn;
@ -134,6 +141,7 @@ struct nv50_head_atom {
bool ovly:1;
bool dither:1;
bool procamp:1;
bool crc:1;
bool or:1;
};
u16 mask;
@ -149,6 +157,19 @@ nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc)
return nv50_head_atom(statec);
}
static inline struct drm_encoder *
nv50_head_atom_get_encoder(struct nv50_head_atom *atom)
{
struct drm_encoder *encoder = NULL;
/* We only ever have a single encoder */
drm_for_each_encoder_mask(encoder, atom->state.crtc->dev,
atom->state.encoder_mask)
break;
return encoder;
}
#define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state)
struct nv50_wndw_atom {


@ -2,6 +2,7 @@
#define __NV50_KMS_CORE_H__
#include "disp.h"
#include "atom.h"
#include "crc.h"
#include <nouveau_encoder.h>
struct nv50_core {
@ -26,6 +27,9 @@ struct nv50_core_func {
} wndw;
const struct nv50_head_func *head;
#if IS_ENABLED(CONFIG_DEBUG_FS)
const struct nv50_crc_func *crc;
#endif
const struct nv50_outp_func {
void (*ctrl)(struct nv50_core *, int or, u32 ctrl,
struct nv50_head_atom *);


@ -30,6 +30,9 @@ core907d = {
.ntfy_wait_done = core507d_ntfy_wait_done,
.update = core507d_update,
.head = &head907d,
#if IS_ENABLED(CONFIG_DEBUG_FS)
.crc = &crc907d,
#endif
.dac = &dac907d,
.sor = &sor907d,
};


@ -30,6 +30,9 @@ core917d = {
.ntfy_wait_done = core507d_ntfy_wait_done,
.update = core507d_update,
.head = &head917d,
#if IS_ENABLED(CONFIG_DEBUG_FS)
.crc = &crc907d,
#endif
.dac = &dac907d,
.sor = &sor907d,
};


@ -142,6 +142,9 @@ corec37d = {
.wndw.owner = corec37d_wndw_owner,
.head = &headc37d,
.sor = &sorc37d,
#if IS_ENABLED(CONFIG_DEBUG_FS)
.crc = &crcc37d,
#endif
};
int


@ -52,6 +52,9 @@ corec57d = {
.wndw.owner = corec37d_wndw_owner,
.head = &headc57d,
.sor = &sorc37d,
#if IS_ENABLED(CONFIG_DEBUG_FS)
.crc = &crcc37d,
#endif
};
int


@ -0,0 +1,751 @@
// SPDX-License-Identifier: MIT
#include <linux/string.h>
#include <drm/drm_crtc.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_vblank.h>
#include <drm/drm_vblank_work.h>
#include <nvif/class.h>
#include <nvif/cl0002.h>
#include <nvif/timer.h>
#include "nouveau_drv.h"
#include "core.h"
#include "head.h"
#include "wndw.h"
#include "handles.h"
#include "crc.h"
static const char * const nv50_crc_sources[] = {
[NV50_CRC_SOURCE_NONE] = "none",
[NV50_CRC_SOURCE_AUTO] = "auto",
[NV50_CRC_SOURCE_RG] = "rg",
[NV50_CRC_SOURCE_OUTP_ACTIVE] = "outp-active",
[NV50_CRC_SOURCE_OUTP_COMPLETE] = "outp-complete",
[NV50_CRC_SOURCE_OUTP_INACTIVE] = "outp-inactive",
};
static int nv50_crc_parse_source(const char *buf, enum nv50_crc_source *s)
{
int i;
if (!buf) {
*s = NV50_CRC_SOURCE_NONE;
return 0;
}
i = match_string(nv50_crc_sources, ARRAY_SIZE(nv50_crc_sources), buf);
if (i < 0)
return i;
*s = i;
return 0;
}
int
nv50_crc_verify_source(struct drm_crtc *crtc, const char *source_name,
size_t *values_cnt)
{
struct nouveau_drm *drm = nouveau_drm(crtc->dev);
enum nv50_crc_source source;
if (nv50_crc_parse_source(source_name, &source) < 0) {
NV_DEBUG(drm, "unknown source %s\n", source_name);
return -EINVAL;
}
*values_cnt = 1;
return 0;
}
const char *const *nv50_crc_get_sources(struct drm_crtc *crtc, size_t *count)
{
*count = ARRAY_SIZE(nv50_crc_sources);
return nv50_crc_sources;
}
static void
nv50_crc_program_ctx(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx)
{
struct nv50_disp *disp = nv50_disp(head->base.base.dev);
struct nv50_core *core = disp->core;
u32 interlock[NV50_DISP_INTERLOCK__SIZE] = { 0 };
core->func->crc->set_ctx(head, ctx);
core->func->update(core, interlock, false);
}
static void nv50_crc_ctx_flip_work(struct kthread_work *base)
{
struct drm_vblank_work *work = to_drm_vblank_work(base);
struct nv50_crc *crc = container_of(work, struct nv50_crc, flip_work);
struct nv50_head *head = container_of(crc, struct nv50_head, crc);
struct drm_crtc *crtc = &head->base.base;
struct nv50_disp *disp = nv50_disp(crtc->dev);
u8 new_idx = crc->ctx_idx ^ 1;
/*
* We don't want to accidentally wait for longer than the vblank, so
* try again on the next vblank if we can't grab the lock.
*/
if (!mutex_trylock(&disp->mutex)) {
DRM_DEV_DEBUG_KMS(crtc->dev->dev,
"Lock contended, delaying CRC ctx flip for head-%d\n",
head->base.index);
drm_vblank_work_schedule(work,
drm_crtc_vblank_count(crtc) + 1,
true);
return;
}
DRM_DEV_DEBUG_KMS(crtc->dev->dev,
"Flipping notifier ctx for head %d (%d -> %d)\n",
drm_crtc_index(crtc), crc->ctx_idx, new_idx);
nv50_crc_program_ctx(head, NULL);
nv50_crc_program_ctx(head, &crc->ctx[new_idx]);
mutex_unlock(&disp->mutex);
spin_lock_irq(&crc->lock);
crc->ctx_changed = true;
spin_unlock_irq(&crc->lock);
}
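/*
* Note: the two notifier contexts are ping-ponged via ctx_idx ^ 1; the
* hardware is pointed at a NULL context first to flush the old one, then
* at the new context, and the vblank handler drains the old context once
* ctx_finished() reports it complete.
*/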
static inline void nv50_crc_reset_ctx(struct nv50_crc_notifier_ctx *ctx)
{
memset_io(ctx->mem.object.map.ptr, 0, ctx->mem.object.map.size);
}
static void
nv50_crc_get_entries(struct nv50_head *head,
const struct nv50_crc_func *func,
enum nv50_crc_source source)
{
struct drm_crtc *crtc = &head->base.base;
struct nv50_crc *crc = &head->crc;
u32 output_crc;
while (crc->entry_idx < func->num_entries) {
/*
* While Nvidia's documentation says CRCs are written on each
* subsequent vblank after being enabled, in practice they
* aren't written immediately.
*/
output_crc = func->get_entry(head, &crc->ctx[crc->ctx_idx],
source, crc->entry_idx);
if (!output_crc)
return;
drm_crtc_add_crc_entry(crtc, true, crc->frame, &output_crc);
crc->frame++;
crc->entry_idx++;
}
}
void nv50_crc_handle_vblank(struct nv50_head *head)
{
struct drm_crtc *crtc = &head->base.base;
struct nv50_crc *crc = &head->crc;
const struct nv50_crc_func *func =
nv50_disp(head->base.base.dev)->core->func->crc;
struct nv50_crc_notifier_ctx *ctx;
bool need_reschedule = false;
if (!func)
return;
/*
* Nothing is lost if CRC reporting slips to the next vblank, so only
* report CRCs when the locks we need are uncontended; blocking here
* could make us miss an actual vblank event.
*/
if (!spin_trylock(&crc->lock))
return;
if (!crc->src)
goto out;
ctx = &crc->ctx[crc->ctx_idx];
if (crc->ctx_changed && func->ctx_finished(head, ctx)) {
nv50_crc_get_entries(head, func, crc->src);
crc->ctx_idx ^= 1;
crc->entry_idx = 0;
crc->ctx_changed = false;
/*
* Unfortunately when notifier contexts are changed during CRC
* capture, we will inevitably lose the CRC entry for the
* frame where the hardware actually latched onto the first
* UPDATE. According to Nvidia's hardware engineers, there's
* no workaround for this.
*
* Now, we could try to be smart here and calculate the number
* of missed CRCs based on audit timestamps, but those were
* removed starting with Volta. Since we always flush our
* updates back-to-back without waiting, we'll just be
* optimistic and assume we always miss exactly one frame.
*/
DRM_DEV_DEBUG_KMS(head->base.base.dev->dev,
"Notifier ctx flip for head-%d finished, lost CRC for frame %llu\n",
head->base.index, crc->frame);
crc->frame++;
nv50_crc_reset_ctx(ctx);
need_reschedule = true;
}
nv50_crc_get_entries(head, func, crc->src);
if (need_reschedule)
drm_vblank_work_schedule(&crc->flip_work,
drm_crtc_vblank_count(crtc)
+ crc->flip_threshold
- crc->entry_idx,
true);
out:
spin_unlock(&crc->lock);
}
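/*
* Worked example (numbers assumed): with crc907d's default flip threshold
* of 245 and, say, 12 entries already drained, the flip work above is
* scheduled for the current vblank count + 233, keeping the usual
* 10-entry margin before the 255-entry notifier fills up.
*/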
static void nv50_crc_wait_ctx_finished(struct nv50_head *head,
const struct nv50_crc_func *func,
struct nv50_crc_notifier_ctx *ctx)
{
struct drm_device *dev = head->base.base.dev;
struct nouveau_drm *drm = nouveau_drm(dev);
s64 ret;
ret = nvif_msec(&drm->client.device, 50,
if (func->ctx_finished(head, ctx)) break;);
if (ret == -ETIMEDOUT)
NV_ERROR(drm,
"CRC notifier ctx for head %d not finished after 50ms\n",
head->base.index);
else if (ret)
NV_ATOMIC(drm,
"CRC notifier ctx for head-%d finished after %lldns\n",
head->base.index, ret);
}
void nv50_crc_atomic_stop_reporting(struct drm_atomic_state *state)
{
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
int i;
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
struct nv50_head *head = nv50_head(crtc);
struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
struct nv50_crc *crc = &head->crc;
if (!asyh->clr.crc)
continue;
spin_lock_irq(&crc->lock);
crc->src = NV50_CRC_SOURCE_NONE;
spin_unlock_irq(&crc->lock);
drm_crtc_vblank_put(crtc);
drm_vblank_work_cancel_sync(&crc->flip_work);
NV_ATOMIC(nouveau_drm(crtc->dev),
"CRC reporting on vblank for head-%d disabled\n",
head->base.index);
/* CRC generation is still enabled in hw; we'll just report any
* remaining CRC entries ourselves once it gets disabled there
*/
}
}
void nv50_crc_atomic_init_notifier_contexts(struct drm_atomic_state *state)
{
struct drm_crtc_state *new_crtc_state;
struct drm_crtc *crtc;
int i;
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
struct nv50_head *head = nv50_head(crtc);
struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
struct nv50_crc *crc = &head->crc;
int i;
if (!asyh->set.crc)
continue;
crc->entry_idx = 0;
crc->ctx_changed = false;
for (i = 0; i < ARRAY_SIZE(crc->ctx); i++)
nv50_crc_reset_ctx(&crc->ctx[i]);
}
}
void nv50_crc_atomic_release_notifier_contexts(struct drm_atomic_state *state)
{
const struct nv50_crc_func *func =
nv50_disp(state->dev)->core->func->crc;
struct drm_crtc_state *new_crtc_state;
struct drm_crtc *crtc;
int i;
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
struct nv50_head *head = nv50_head(crtc);
struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
struct nv50_crc *crc = &head->crc;
struct nv50_crc_notifier_ctx *ctx = &crc->ctx[crc->ctx_idx];
if (!asyh->clr.crc)
continue;
if (crc->ctx_changed) {
nv50_crc_wait_ctx_finished(head, func, ctx);
ctx = &crc->ctx[crc->ctx_idx ^ 1];
}
nv50_crc_wait_ctx_finished(head, func, ctx);
}
}
void nv50_crc_atomic_start_reporting(struct drm_atomic_state *state)
{
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
int i;
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
struct nv50_head *head = nv50_head(crtc);
struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
struct nv50_crc *crc = &head->crc;
u64 vbl_count;
if (!asyh->set.crc)
continue;
drm_crtc_vblank_get(crtc);
spin_lock_irq(&crc->lock);
vbl_count = drm_crtc_vblank_count(crtc);
crc->frame = vbl_count;
crc->src = asyh->crc.src;
drm_vblank_work_schedule(&crc->flip_work,
vbl_count + crc->flip_threshold,
true);
spin_unlock_irq(&crc->lock);
NV_ATOMIC(nouveau_drm(crtc->dev),
"CRC reporting on vblank for head-%d enabled\n",
head->base.index);
}
}
int nv50_crc_atomic_check_head(struct nv50_head *head,
struct nv50_head_atom *asyh,
struct nv50_head_atom *armh)
{
struct nv50_atom *atom = nv50_atom(asyh->state.state);
struct drm_device *dev = head->base.base.dev;
struct nv50_disp *disp = nv50_disp(dev);
bool changed = armh->crc.src != asyh->crc.src;
if (!armh->crc.src && !asyh->crc.src) {
asyh->set.crc = false;
asyh->clr.crc = false;
return 0;
}
/* While we don't care about entry tags, Volta+ hw always needs the
* controlling wndw channel programmed to a wndw that's owned by our
* head
*/
if (asyh->crc.src && disp->disp->object.oclass >= GV100_DISP &&
!(BIT(asyh->crc.wndw) & asyh->wndw.owned)) {
if (!asyh->wndw.owned) {
/* TODO: once we support flexible channel ownership,
* we should write some code here to handle attempting
* to "steal" a plane: e.g. take a plane that is
* currently not-visible and owned by another head,
* and reassign it to this head. If we fail to do so,
* we should reject the mode outright as CRC capture
* then becomes impossible.
*/
NV_ATOMIC(nouveau_drm(dev),
"No available wndws for CRC readback\n");
return -EINVAL;
}
asyh->crc.wndw = ffs(asyh->wndw.owned) - 1;
}
if (drm_atomic_crtc_needs_modeset(&asyh->state) || changed ||
armh->crc.wndw != asyh->crc.wndw) {
asyh->clr.crc = armh->crc.src && armh->state.active;
asyh->set.crc = asyh->crc.src && asyh->state.active;
if (changed)
asyh->set.or |= armh->or.crc_raster !=
asyh->or.crc_raster;
if (asyh->clr.crc && asyh->set.crc)
atom->flush_disable = true;
} else {
asyh->set.crc = false;
asyh->clr.crc = false;
}
return 0;
}
void nv50_crc_atomic_check_outp(struct nv50_atom *atom)
{
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state, *new_crtc_state;
int i;
if (atom->flush_disable)
return;
for_each_oldnew_crtc_in_state(&atom->state, crtc, old_crtc_state,
new_crtc_state, i) {
struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state);
struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
struct nv50_outp_atom *outp_atom;
struct nouveau_encoder *outp =
nv50_real_outp(nv50_head_atom_get_encoder(armh));
struct drm_encoder *encoder = &outp->base.base;
if (!asyh->clr.crc)
continue;
/*
* Re-programming ORs can't be done in the same flush as
* disabling CRCs
*/
list_for_each_entry(outp_atom, &atom->outp, head) {
if (outp_atom->encoder == encoder) {
if (outp_atom->set.mask) {
atom->flush_disable = true;
return;
} else {
break;
}
}
}
}
}
static enum nv50_crc_source_type
nv50_crc_source_type(struct nouveau_encoder *outp,
enum nv50_crc_source source)
{
struct dcb_output *dcbe = outp->dcb;
switch (source) {
case NV50_CRC_SOURCE_NONE: return NV50_CRC_SOURCE_TYPE_NONE;
case NV50_CRC_SOURCE_RG: return NV50_CRC_SOURCE_TYPE_RG;
default: break;
}
if (dcbe->location != DCB_LOC_ON_CHIP)
return NV50_CRC_SOURCE_TYPE_PIOR;
switch (dcbe->type) {
case DCB_OUTPUT_DP: return NV50_CRC_SOURCE_TYPE_SF;
case DCB_OUTPUT_ANALOG: return NV50_CRC_SOURCE_TYPE_DAC;
default: return NV50_CRC_SOURCE_TYPE_SOR;
}
}
void nv50_crc_atomic_set(struct nv50_head *head,
struct nv50_head_atom *asyh)
{
struct drm_crtc *crtc = &head->base.base;
struct drm_device *dev = crtc->dev;
struct nv50_crc *crc = &head->crc;
const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc;
struct nouveau_encoder *outp =
nv50_real_outp(nv50_head_atom_get_encoder(asyh));
func->set_src(head, outp->or,
nv50_crc_source_type(outp, asyh->crc.src),
&crc->ctx[crc->ctx_idx], asyh->crc.wndw);
}
void nv50_crc_atomic_clr(struct nv50_head *head)
{
const struct nv50_crc_func *func =
nv50_disp(head->base.base.dev)->core->func->crc;
func->set_src(head, 0, NV50_CRC_SOURCE_TYPE_NONE, NULL, 0);
}
#define NV50_CRC_RASTER_ACTIVE 0
#define NV50_CRC_RASTER_COMPLETE 1
#define NV50_CRC_RASTER_INACTIVE 2
static inline int
nv50_crc_raster_type(enum nv50_crc_source source)
{
switch (source) {
case NV50_CRC_SOURCE_NONE:
case NV50_CRC_SOURCE_AUTO:
case NV50_CRC_SOURCE_RG:
case NV50_CRC_SOURCE_OUTP_ACTIVE:
return NV50_CRC_RASTER_ACTIVE;
case NV50_CRC_SOURCE_OUTP_COMPLETE:
return NV50_CRC_RASTER_COMPLETE;
case NV50_CRC_SOURCE_OUTP_INACTIVE:
return NV50_CRC_RASTER_INACTIVE;
}
return 0;
}
/* We handle mapping the memory for CRC notifiers ourselves, since each
* notifier needs its own handle
*/
static inline int
nv50_crc_ctx_init(struct nv50_head *head, struct nvif_mmu *mmu,
struct nv50_crc_notifier_ctx *ctx, size_t len, int idx)
{
struct nv50_core *core = nv50_disp(head->base.base.dev)->core;
int ret;
ret = nvif_mem_init_map(mmu, NVIF_MEM_VRAM, len, &ctx->mem);
if (ret)
return ret;
ret = nvif_object_init(&core->chan.base.user,
NV50_DISP_HANDLE_CRC_CTX(head, idx),
NV_DMA_IN_MEMORY,
&(struct nv_dma_v0) {
.target = NV_DMA_V0_TARGET_VRAM,
.access = NV_DMA_V0_ACCESS_RDWR,
.start = ctx->mem.addr,
.limit = ctx->mem.addr
+ ctx->mem.size - 1,
}, sizeof(struct nv_dma_v0),
&ctx->ntfy);
if (ret)
goto fail_fini;
return 0;
fail_fini:
nvif_mem_fini(&ctx->mem);
return ret;
}
static inline void
nv50_crc_ctx_fini(struct nv50_crc_notifier_ctx *ctx)
{
nvif_object_fini(&ctx->ntfy);
nvif_mem_fini(&ctx->mem);
}
int nv50_crc_set_source(struct drm_crtc *crtc, const char *source_str)
{
struct drm_device *dev = crtc->dev;
struct drm_atomic_state *state;
struct drm_modeset_acquire_ctx ctx;
struct nv50_head *head = nv50_head(crtc);
struct nv50_crc *crc = &head->crc;
const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc;
struct nvif_mmu *mmu = &nouveau_drm(dev)->client.mmu;
struct nv50_head_atom *asyh;
struct drm_crtc_state *crtc_state;
enum nv50_crc_source source;
int ret = 0, ctx_flags = 0, i;
ret = nv50_crc_parse_source(source_str, &source);
if (ret)
return ret;
/*
* We don't want the user to accidentally interrupt us while we're
* disabling CRCs, so only allow interruptible waits when enabling them.
*/
if (source)
ctx_flags |= DRM_MODESET_ACQUIRE_INTERRUPTIBLE;
drm_modeset_acquire_init(&ctx, ctx_flags);
state = drm_atomic_state_alloc(dev);
if (!state) {
ret = -ENOMEM;
goto out_acquire_fini;
}
state->acquire_ctx = &ctx;
if (source) {
for (i = 0; i < ARRAY_SIZE(head->crc.ctx); i++) {
ret = nv50_crc_ctx_init(head, mmu, &crc->ctx[i],
func->notifier_len, i);
if (ret)
goto out_ctx_fini;
}
}
retry:
crtc_state = drm_atomic_get_crtc_state(state, &head->base.base);
if (IS_ERR(crtc_state)) {
ret = PTR_ERR(crtc_state);
if (ret == -EDEADLK)
goto deadlock;
else if (ret)
goto out_drop_locks;
}
asyh = nv50_head_atom(crtc_state);
asyh->crc.src = source;
asyh->or.crc_raster = nv50_crc_raster_type(source);
ret = drm_atomic_commit(state);
if (ret == -EDEADLK)
goto deadlock;
else if (ret)
goto out_drop_locks;
if (!source) {
/*
* If the user specified a custom flip threshold through
* debugfs, reset it
*/
crc->flip_threshold = func->flip_threshold;
}
out_drop_locks:
drm_modeset_drop_locks(&ctx);
out_ctx_fini:
if (!source || ret) {
for (i = 0; i < ARRAY_SIZE(crc->ctx); i++)
nv50_crc_ctx_fini(&crc->ctx[i]);
}
drm_atomic_state_put(state);
out_acquire_fini:
drm_modeset_acquire_fini(&ctx);
return ret;
deadlock:
drm_atomic_state_clear(state);
drm_modeset_backoff(&ctx);
goto retry;
}
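/*
* Illustrative sketch, not part of this patch: stripped of the CRC
* specifics, the retry/deadlock labels above are the standard DRM
* modeset-lock backoff idiom:
*/
static int example_commit_with_backoff(struct drm_atomic_state *state,
				       struct drm_modeset_acquire_ctx *ctx)
{
	int ret;

retry:
	ret = drm_atomic_commit(state);
	if (ret == -EDEADLK) {
		/* Drop contended locks, wait for the holder, then retry. */
		drm_atomic_state_clear(state);
		drm_modeset_backoff(ctx);
		goto retry;
	}
	return ret;
}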
static int
nv50_crc_debugfs_flip_threshold_get(struct seq_file *m, void *data)
{
struct nv50_head *head = m->private;
struct drm_crtc *crtc = &head->base.base;
struct nv50_crc *crc = &head->crc;
int ret;
ret = drm_modeset_lock_single_interruptible(&crtc->mutex);
if (ret)
return ret;
seq_printf(m, "%d\n", crc->flip_threshold);
drm_modeset_unlock(&crtc->mutex);
return ret;
}
static int
nv50_crc_debugfs_flip_threshold_open(struct inode *inode, struct file *file)
{
return single_open(file, nv50_crc_debugfs_flip_threshold_get,
inode->i_private);
}
static ssize_t
nv50_crc_debugfs_flip_threshold_set(struct file *file,
const char __user *ubuf, size_t len,
loff_t *offp)
{
struct seq_file *m = file->private_data;
struct nv50_head *head = m->private;
struct nv50_head_atom *armh;
struct drm_crtc *crtc = &head->base.base;
struct nouveau_drm *drm = nouveau_drm(crtc->dev);
struct nv50_crc *crc = &head->crc;
const struct nv50_crc_func *func =
nv50_disp(crtc->dev)->core->func->crc;
int value, ret;
ret = kstrtoint_from_user(ubuf, len, 10, &value);
if (ret)
return ret;
if (value > func->flip_threshold)
return -EINVAL;
else if (value == -1)
value = func->flip_threshold;
else if (value < -1)
return -EINVAL;
ret = drm_modeset_lock_single_interruptible(&crtc->mutex);
if (ret)
return ret;
armh = nv50_head_atom(crtc->state);
if (armh->crc.src) {
ret = -EBUSY;
goto out;
}
NV_DEBUG(drm,
"Changing CRC flip threshold for next capture on head-%d to %d\n",
head->base.index, value);
crc->flip_threshold = value;
ret = len;
out:
drm_modeset_unlock(&crtc->mutex);
return ret;
}
static const struct file_operations nv50_crc_flip_threshold_fops = {
.owner = THIS_MODULE,
.open = nv50_crc_debugfs_flip_threshold_open,
.read = seq_read,
.write = nv50_crc_debugfs_flip_threshold_set,
};
int nv50_head_crc_late_register(struct nv50_head *head)
{
struct drm_crtc *crtc = &head->base.base;
const struct nv50_crc_func *func =
nv50_disp(crtc->dev)->core->func->crc;
struct dentry *root;
if (!func || !crtc->debugfs_entry)
return 0;
root = debugfs_create_dir("nv_crc", crtc->debugfs_entry);
debugfs_create_file("flip_threshold", 0644, root, head,
&nv50_crc_flip_threshold_fops);
return 0;
}
static inline void
nv50_crc_init_head(struct nv50_disp *disp, const struct nv50_crc_func *func,
struct nv50_head *head)
{
struct nv50_crc *crc = &head->crc;
crc->flip_threshold = func->flip_threshold;
spin_lock_init(&crc->lock);
drm_vblank_work_init(&crc->flip_work, &head->base.base,
nv50_crc_ctx_flip_work);
}
void nv50_crc_init(struct drm_device *dev)
{
struct nv50_disp *disp = nv50_disp(dev);
struct drm_crtc *crtc;
const struct nv50_crc_func *func = disp->core->func->crc;
if (!func)
return;
drm_for_each_crtc(crtc, dev)
nv50_crc_init_head(disp, func, nv50_head(crtc));
}


@ -0,0 +1,131 @@
/* SPDX-License-Identifier: MIT */
#ifndef __NV50_CRC_H__
#define __NV50_CRC_H__
#include <linux/mutex.h>
#include <drm/drm_crtc.h>
#include <drm/drm_vblank_work.h>
#include <nvif/mem.h>
#include <nvkm/subdev/bios.h>
#include "nouveau_encoder.h"
struct nv50_atom;
struct nv50_disp;
struct nv50_head;
#if IS_ENABLED(CONFIG_DEBUG_FS)
enum nv50_crc_source {
NV50_CRC_SOURCE_NONE = 0,
NV50_CRC_SOURCE_AUTO,
NV50_CRC_SOURCE_RG,
NV50_CRC_SOURCE_OUTP_ACTIVE,
NV50_CRC_SOURCE_OUTP_COMPLETE,
NV50_CRC_SOURCE_OUTP_INACTIVE,
};
/* RG -> SF (DP only)
* -> SOR
* -> PIOR
* -> DAC
*/
enum nv50_crc_source_type {
NV50_CRC_SOURCE_TYPE_NONE = 0,
NV50_CRC_SOURCE_TYPE_SOR,
NV50_CRC_SOURCE_TYPE_PIOR,
NV50_CRC_SOURCE_TYPE_DAC,
NV50_CRC_SOURCE_TYPE_RG,
NV50_CRC_SOURCE_TYPE_SF,
};
struct nv50_crc_notifier_ctx {
struct nvif_mem mem;
struct nvif_object ntfy;
};
struct nv50_crc_atom {
enum nv50_crc_source src;
/* Only used for gv100+ */
u8 wndw : 4;
};
struct nv50_crc_func {
void (*set_src)(struct nv50_head *, int or, enum nv50_crc_source_type,
struct nv50_crc_notifier_ctx *, u32 wndw);
void (*set_ctx)(struct nv50_head *, struct nv50_crc_notifier_ctx *);
u32 (*get_entry)(struct nv50_head *, struct nv50_crc_notifier_ctx *,
enum nv50_crc_source, int idx);
bool (*ctx_finished)(struct nv50_head *,
struct nv50_crc_notifier_ctx *);
short flip_threshold;
short num_entries;
size_t notifier_len;
};
struct nv50_crc {
spinlock_t lock;
struct nv50_crc_notifier_ctx ctx[2];
struct drm_vblank_work flip_work;
enum nv50_crc_source src;
u64 frame;
short entry_idx;
short flip_threshold;
u8 ctx_idx : 1;
bool ctx_changed : 1;
};
void nv50_crc_init(struct drm_device *dev);
int nv50_head_crc_late_register(struct nv50_head *);
void nv50_crc_handle_vblank(struct nv50_head *head);
int nv50_crc_verify_source(struct drm_crtc *, const char *, size_t *);
const char *const *nv50_crc_get_sources(struct drm_crtc *, size_t *);
int nv50_crc_set_source(struct drm_crtc *, const char *);
int nv50_crc_atomic_check_head(struct nv50_head *, struct nv50_head_atom *,
struct nv50_head_atom *);
void nv50_crc_atomic_check_outp(struct nv50_atom *atom);
void nv50_crc_atomic_stop_reporting(struct drm_atomic_state *);
void nv50_crc_atomic_init_notifier_contexts(struct drm_atomic_state *);
void nv50_crc_atomic_release_notifier_contexts(struct drm_atomic_state *);
void nv50_crc_atomic_start_reporting(struct drm_atomic_state *);
void nv50_crc_atomic_set(struct nv50_head *, struct nv50_head_atom *);
void nv50_crc_atomic_clr(struct nv50_head *);
extern const struct nv50_crc_func crc907d;
extern const struct nv50_crc_func crcc37d;
#else /* IS_ENABLED(CONFIG_DEBUG_FS) */
struct nv50_crc {};
struct nv50_crc_func {};
struct nv50_crc_atom {};
#define nv50_crc_verify_source NULL
#define nv50_crc_get_sources NULL
#define nv50_crc_set_source NULL
static inline void nv50_crc_init(struct drm_device *dev) {}
static inline int
nv50_head_crc_late_register(struct nv50_head *head) { return 0; }
static inline void
nv50_crc_handle_vblank(struct nv50_head *head) {}
static inline int
nv50_crc_atomic_check_head(struct nv50_head *head, struct nv50_head_atom *asyh,
			   struct nv50_head_atom *armh) { return 0; }
static inline void nv50_crc_atomic_check_outp(struct nv50_atom *atom) {}
static inline void
nv50_crc_atomic_stop_reporting(struct drm_atomic_state *state) {}
static inline void
nv50_crc_atomic_init_notifier_contexts(struct drm_atomic_state *state) {}
static inline void
nv50_crc_atomic_release_notifier_contexts(struct drm_atomic_state *state) {}
static inline void
nv50_crc_atomic_start_reporting(struct drm_atomic_state *state) {}
static inline void
nv50_crc_atomic_set(struct nv50_head *head, struct nv50_head_atom *asyh) {}
static inline void
nv50_crc_atomic_clr(struct nv50_head *head) {}
#endif /* IS_ENABLED(CONFIG_DEBUG_FS) */
#endif /* !__NV50_CRC_H__ */


@ -0,0 +1,139 @@
// SPDX-License-Identifier: MIT
#include <drm/drm_crtc.h>
#include "crc.h"
#include "core.h"
#include "disp.h"
#include "head.h"
#define CRC907D_MAX_ENTRIES 255
struct crc907d_notifier {
u32 status;
u32 :32; /* reserved */
struct crc907d_entry {
u32 status;
u32 compositor_crc;
u32 output_crc[2];
} entries[CRC907D_MAX_ENTRIES];
} __packed;
static void
crc907d_set_src(struct nv50_head *head, int or,
enum nv50_crc_source_type source,
struct nv50_crc_notifier_ctx *ctx, u32 wndw)
{
struct drm_crtc *crtc = &head->base.base;
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
const u32 hoff = head->base.index * 0x300;
u32 *push;
u32 crc_args = 0xfff00000;
switch (source) {
case NV50_CRC_SOURCE_TYPE_SOR:
crc_args |= (0x00000f0f + or * 16) << 8;
break;
case NV50_CRC_SOURCE_TYPE_PIOR:
crc_args |= (0x000000ff + or * 256) << 8;
break;
case NV50_CRC_SOURCE_TYPE_DAC:
crc_args |= (0x00000ff0 + or) << 8;
break;
case NV50_CRC_SOURCE_TYPE_RG:
crc_args |= (0x00000ff8 + drm_crtc_index(crtc)) << 8;
break;
case NV50_CRC_SOURCE_TYPE_SF:
crc_args |= (0x00000f8f + drm_crtc_index(crtc) * 16) << 8;
break;
case NV50_CRC_SOURCE_NONE:
crc_args |= 0x000fff00;
break;
}
push = evo_wait(core, 4);
if (!push)
return;
if (source) {
evo_mthd(push, 0x0438 + hoff, 1);
evo_data(push, ctx->ntfy.handle);
evo_mthd(push, 0x0430 + hoff, 1);
evo_data(push, crc_args);
} else {
evo_mthd(push, 0x0430 + hoff, 1);
evo_data(push, crc_args);
evo_mthd(push, 0x0438 + hoff, 1);
evo_data(push, 0);
}
evo_kick(push, core);
}
static void crc907d_set_ctx(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx)
{
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
u32 *push = evo_wait(core, 2);
if (!push)
return;
evo_mthd(push, 0x0438 + (head->base.index * 0x300), 1);
evo_data(push, ctx ? ctx->ntfy.handle : 0);
evo_kick(push, core);
}
static u32 crc907d_get_entry(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx,
enum nv50_crc_source source, int idx)
{
struct crc907d_notifier __iomem *notifier = ctx->mem.object.map.ptr;
return ioread32_native(&notifier->entries[idx].output_crc[0]);
}
static bool crc907d_ctx_finished(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx)
{
struct nouveau_drm *drm = nouveau_drm(head->base.base.dev);
struct crc907d_notifier __iomem *notifier = ctx->mem.object.map.ptr;
const u32 status = ioread32_native(&notifier->status);
const u32 overflow = status & 0x0000003e;
if (!(status & 0x00000001))
return false;
if (overflow) {
const char *engine = NULL;
switch (overflow) {
case 0x00000004: engine = "DSI"; break;
case 0x00000008: engine = "Compositor"; break;
case 0x00000010: engine = "CRC output 1"; break;
case 0x00000020: engine = "CRC output 2"; break;
}
if (engine)
NV_ERROR(drm,
"CRC notifier context for head %d overflowed on %s: %x\n",
head->base.index, engine, status);
else
NV_ERROR(drm,
"CRC notifier context for head %d overflowed: %x\n",
head->base.index, status);
}
NV_DEBUG(drm, "Head %d CRC context status: %x\n",
head->base.index, status);
return true;
}
const struct nv50_crc_func crc907d = {
.set_src = crc907d_set_src,
.set_ctx = crc907d_set_ctx,
.get_entry = crc907d_get_entry,
.ctx_finished = crc907d_ctx_finished,
.flip_threshold = CRC907D_MAX_ENTRIES - 10,
.num_entries = CRC907D_MAX_ENTRIES,
.notifier_len = sizeof(struct crc907d_notifier),
};


@ -0,0 +1,153 @@
// SPDX-License-Identifier: MIT
#include <drm/drm_crtc.h>
#include "crc.h"
#include "core.h"
#include "disp.h"
#include "head.h"
#define CRCC37D_MAX_ENTRIES 2047
struct crcc37d_notifier {
u32 status;
/* reserved */
u32 :32;
u32 :32;
u32 :32;
u32 :32;
u32 :32;
u32 :32;
u32 :32;
struct crcc37d_entry {
u32 status[2];
u32 :32; /* reserved */
u32 compositor_crc;
u32 rg_crc;
u32 output_crc[2];
u32 :32; /* reserved */
} entries[CRCC37D_MAX_ENTRIES];
} __packed;
static void
crcc37d_set_src(struct nv50_head *head, int or,
enum nv50_crc_source_type source,
struct nv50_crc_notifier_ctx *ctx, u32 wndw)
{
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
const u32 hoff = head->base.index * 0x400;
u32 *push;
u32 crc_args;
switch (source) {
case NV50_CRC_SOURCE_TYPE_SOR:
crc_args = (0x00000050 + or) << 12;
break;
case NV50_CRC_SOURCE_TYPE_PIOR:
crc_args = (0x00000060 + or) << 12;
break;
case NV50_CRC_SOURCE_TYPE_SF:
crc_args = 0x00000030 << 12;
break;
default:
crc_args = 0;
break;
}
push = evo_wait(core, 4);
if (!push)
return;
if (source) {
evo_mthd(push, 0x2180 + hoff, 1);
evo_data(push, ctx->ntfy.handle);
evo_mthd(push, 0x2184 + hoff, 1);
evo_data(push, crc_args | wndw);
} else {
evo_mthd(push, 0x2184 + hoff, 1);
evo_data(push, 0);
evo_mthd(push, 0x2180 + hoff, 1);
evo_data(push, 0);
}
evo_kick(push, core);
}
static void crcc37d_set_ctx(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx)
{
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
u32 *push = evo_wait(core, 2);
if (!push)
return;
evo_mthd(push, 0x2180 + (head->base.index * 0x400), 1);
evo_data(push, ctx ? ctx->ntfy.handle : 0);
evo_kick(push, core);
}
static u32 crcc37d_get_entry(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx,
enum nv50_crc_source source, int idx)
{
struct crcc37d_notifier __iomem *notifier = ctx->mem.object.map.ptr;
struct crcc37d_entry __iomem *entry = &notifier->entries[idx];
u32 __iomem *crc_addr;
if (source == NV50_CRC_SOURCE_RG)
crc_addr = &entry->rg_crc;
else
crc_addr = &entry->output_crc[0];
return ioread32_native(crc_addr);
}
static bool crcc37d_ctx_finished(struct nv50_head *head,
struct nv50_crc_notifier_ctx *ctx)
{
struct nouveau_drm *drm = nouveau_drm(head->base.base.dev);
struct crcc37d_notifier __iomem *notifier = ctx->mem.object.map.ptr;
const u32 status = ioread32_native(&notifier->status);
const u32 overflow = status & 0x0000007e;
if (!(status & 0x00000001))
return false;
if (overflow) {
const char *engine = NULL;
switch (overflow) {
case 0x00000004: engine = "Front End"; break;
case 0x00000008: engine = "Compositor"; break;
case 0x00000010: engine = "RG"; break;
case 0x00000020: engine = "CRC output 1"; break;
case 0x00000040: engine = "CRC output 2"; break;
}
if (engine)
NV_ERROR(drm,
"CRC notifier context for head %d overflowed on %s: %x\n",
head->base.index, engine, status);
else
NV_ERROR(drm,
"CRC notifier context for head %d overflowed: %x\n",
head->base.index, status);
}
NV_DEBUG(drm, "Head %d CRC context status: %x\n",
head->base.index, status);
return true;
}
const struct nv50_crc_func crcc37d = {
.set_src = crcc37d_set_src,
.set_ctx = crcc37d_set_ctx,
.get_entry = crcc37d_get_entry,
.ctx_finished = crcc37d_ctx_finished,
.flip_threshold = CRCC37D_MAX_ENTRIES - 30,
.num_entries = CRCC37D_MAX_ENTRIES,
.notifier_len = sizeof(struct crcc37d_notifier),
};


@ -26,6 +26,7 @@
#include "core.h"
#include "head.h"
#include "wndw.h"
#include "handles.h"
#include <linux/dma-mapping.h>
#include <linux/hdmi.h>
@ -57,24 +58,6 @@
#include <subdev/bios/dp.h>
/******************************************************************************
* Atomic state
*****************************************************************************/
struct nv50_outp_atom {
struct list_head head;
struct drm_encoder *encoder;
bool flush_disable;
union nv50_outp_atom_mask {
struct {
bool ctrl:1;
};
u8 mask;
} set, clr;
};
/******************************************************************************
* EVO channel
*****************************************************************************/
@ -172,7 +155,8 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
if (!syncbuf)
return 0;
ret = nvif_object_init(&dmac->base.user, 0xf0000000, NV_DMA_IN_MEMORY,
ret = nvif_object_init(&dmac->base.user, NV50_DISP_HANDLE_SYNCBUF,
NV_DMA_IN_MEMORY,
&(struct nv_dma_v0) {
.target = NV_DMA_V0_TARGET_VRAM,
.access = NV_DMA_V0_ACCESS_RDWR,
@ -183,7 +167,8 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
if (ret)
return ret;
ret = nvif_object_init(&dmac->base.user, 0xf0000001, NV_DMA_IN_MEMORY,
ret = nvif_object_init(&dmac->base.user, NV50_DISP_HANDLE_VRAM,
NV_DMA_IN_MEMORY,
&(struct nv_dma_v0) {
.target = NV_DMA_V0_TARGET_VRAM,
.access = NV_DMA_V0_ACCESS_RDWR,
@ -798,6 +783,19 @@ struct nv50_msto {
bool disabled;
};
struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
{
struct nv50_msto *msto;
if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST)
return nouveau_encoder(encoder);
msto = nv50_msto(encoder);
if (!msto->mstc)
return NULL;
return msto->mstc->mstm->outp;
}
static struct drm_dp_payload *
nv50_msto_payload(struct nv50_msto *msto)
{
@ -1945,8 +1943,10 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
struct nv50_outp_atom *outp, *outt;
u32 interlock[NV50_DISP_INTERLOCK__SIZE] = {};
int i;
bool flushed = false;
NV_ATOMIC(drm, "commit %d %d\n", atom->lock_core, atom->flush_disable);
nv50_crc_atomic_stop_reporting(state);
drm_atomic_helper_wait_for_fences(dev, state, false);
drm_atomic_helper_wait_for_dependencies(state);
drm_atomic_helper_update_legacy_modeset_state(dev, state);
@ -2004,6 +2004,8 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
nv50_disp_atomic_commit_wndw(state, interlock);
nv50_disp_atomic_commit_core(state, interlock);
memset(interlock, 0x00, sizeof(interlock));
flushed = true;
}
}
}
@ -2014,9 +2016,15 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
nv50_disp_atomic_commit_wndw(state, interlock);
nv50_disp_atomic_commit_core(state, interlock);
memset(interlock, 0x00, sizeof(interlock));
flushed = true;
}
}
if (flushed)
nv50_crc_atomic_release_notifier_contexts(state);
nv50_crc_atomic_init_notifier_contexts(state);
/* Update output path(s). */
list_for_each_entry_safe(outp, outt, &atom->outp, head) {
const struct drm_encoder_helper_funcs *help;
@ -2130,6 +2138,9 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
}
}
nv50_crc_atomic_start_reporting(state);
if (!flushed)
nv50_crc_atomic_release_notifier_contexts(state);
drm_atomic_helper_commit_hw_done(state);
drm_atomic_helper_cleanup_planes(dev, state);
drm_atomic_helper_commit_cleanup_done(state);
@ -2287,12 +2298,28 @@ static int
nv50_disp_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
{
struct nv50_atom *atom = nv50_atom(state);
struct nv50_core *core = nv50_disp(dev)->core;
struct drm_connector_state *old_connector_state, *new_connector_state;
struct drm_connector *connector;
struct drm_crtc_state *new_crtc_state;
struct drm_crtc *crtc;
struct nv50_head *head;
struct nv50_head_atom *asyh;
int ret, i;
if (core->assign_windows && core->func->head->static_wndw_map) {
drm_for_each_crtc(crtc, dev) {
new_crtc_state = drm_atomic_get_crtc_state(state,
crtc);
if (IS_ERR(new_crtc_state))
return PTR_ERR(new_crtc_state);
head = nv50_head(crtc);
asyh = nv50_head_atom(new_crtc_state);
core->func->head->static_wndw_map(head, asyh);
}
}
/* We need to handle colour management on a per-plane basis. */
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
if (new_crtc_state->color_mgmt_changed) {
@ -2320,6 +2347,8 @@ nv50_disp_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
if (ret)
return ret;
nv50_crc_atomic_check_outp(atom);
return 0;
}


@ -1,10 +1,12 @@
#ifndef __NV50_KMS_H__
#define __NV50_KMS_H__
#include <linux/workqueue.h>
#include <nvif/mem.h>
#include "nouveau_display.h"
struct nv50_msto;
struct nouveau_encoder;
struct nv50_disp {
struct nvif_disp *disp;
@ -71,11 +73,33 @@ struct nv50_dmac {
struct mutex lock;
};
struct nv50_outp_atom {
struct list_head head;
struct drm_encoder *encoder;
bool flush_disable;
union nv50_outp_atom_mask {
struct {
bool ctrl:1;
};
u8 mask;
} set, clr;
};
int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
const s32 *oclass, u8 head, void *data, u32 size,
u64 syncbuf, struct nv50_dmac *dmac);
void nv50_dmac_destroy(struct nv50_dmac *);
/*
* For normal encoders this just returns the encoder. For active MST encoders,
* this returns the real outp that's driving displays in the topology.
* Inactive MST encoders return NULL, since they would have no real outp to
* return anyway.
*/
struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder);
u32 *evo_wait(struct nv50_dmac *, int nr);
void evo_kick(u32 *, struct nv50_dmac *);


@ -0,0 +1,16 @@
/* SPDX-License-Identifier: MIT */
#ifndef __NV50_KMS_HANDLES_H__
#define __NV50_KMS_HANDLES_H__
/*
* Various hard-coded object handles that nouveau uses. These are made-up by
* nouveau developers, not Nvidia. The only significance of the handles chosen
* is that they must all be unique.
*/
#define NV50_DISP_HANDLE_SYNCBUF 0xf0000000
#define NV50_DISP_HANDLE_VRAM 0xf0000001
#define NV50_DISP_HANDLE_WNDW_CTX(kind) (0xfb000000 | kind)
#define NV50_DISP_HANDLE_CRC_CTX(head, i) (0xfc000000 | head->base.index << 1 | i)
#endif /* !__NV50_KMS_HANDLES_H__ */
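/*
* Worked example: for head->base.index == 1, NV50_DISP_HANDLE_CRC_CTX(head, 0)
* is 0xfc000000 | 1 << 1 | 0 == 0xfc000002, and NV50_DISP_HANDLE_CRC_CTX(head, 1)
* is 0xfc000003, so every (head, context) pair gets a unique handle.
*/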


@ -24,13 +24,17 @@
#include "core.h"
#include "curs.h"
#include "ovly.h"
#include "crc.h"
#include <nvif/class.h>
#include <nvif/event.h>
#include <nvif/cl0046.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_vblank.h>
#include "nouveau_connector.h"
void
nv50_head_flush_clr(struct nv50_head *head,
struct nv50_head_atom *asyh, bool flush)
@ -38,6 +42,7 @@ nv50_head_flush_clr(struct nv50_head *head,
union nv50_head_atom_mask clr = {
.mask = asyh->clr.mask & ~(flush ? 0 : asyh->set.mask),
};
if (clr.crc) nv50_crc_atomic_clr(head);
if (clr.olut) head->func->olut_clr(head);
if (clr.core) head->func->core_clr(head);
if (clr.curs) head->func->curs_clr(head);
@ -61,6 +66,7 @@ nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
if (asyh->set.ovly ) head->func->ovly (head, asyh);
if (asyh->set.dither ) head->func->dither (head, asyh);
if (asyh->set.procamp) head->func->procamp (head, asyh);
if (asyh->set.crc ) nv50_crc_atomic_set (head, asyh);
if (asyh->set.or ) head->func->or (head, asyh);
}
@ -84,18 +90,20 @@ nv50_head_atomic_check_dither(struct nv50_head_atom *armh,
{
u32 mode = 0x00;
if (asyc->dither.mode == DITHERING_MODE_AUTO) {
if (asyh->base.depth > asyh->or.bpc * 3)
mode = DITHERING_MODE_DYNAMIC2X2;
} else {
mode = asyc->dither.mode;
}
if (asyc->dither.mode) {
if (asyc->dither.mode == DITHERING_MODE_AUTO) {
if (asyh->base.depth > asyh->or.bpc * 3)
mode = DITHERING_MODE_DYNAMIC2X2;
} else {
mode = asyc->dither.mode;
}
if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
if (asyh->or.bpc >= 8)
mode |= DITHERING_DEPTH_8BPC;
} else {
mode |= asyc->dither.depth;
if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
if (asyh->or.bpc >= 8)
mode |= DITHERING_DEPTH_8BPC;
} else {
mode |= asyc->dither.depth;
}
}
asyh->dither.enable = mode;
@ -311,7 +319,7 @@ nv50_head_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state)
struct nouveau_conn_atom *asyc = NULL;
struct drm_connector_state *conns;
struct drm_connector *conn;
int i;
int i, ret;
NV_ATOMIC(drm, "%s atomic_check %d\n", crtc->name, asyh->state.active);
if (asyh->state.active) {
@ -406,6 +414,10 @@ nv50_head_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state)
asyh->set.curs = asyh->curs.visible;
}
ret = nv50_crc_atomic_check_head(head, asyh, armh);
if (ret)
return ret;
if (asyh->clr.mask || asyh->set.mask)
nv50_atom(asyh->state.state)->lock_core = true;
return 0;
@ -444,6 +456,7 @@ nv50_head_atomic_duplicate_state(struct drm_crtc *crtc)
asyh->ovly = armh->ovly;
asyh->dither = armh->dither;
asyh->procamp = armh->procamp;
asyh->crc = armh->crc;
asyh->or = armh->or;
asyh->dp = armh->dp;
asyh->clr.mask = 0;
@ -465,10 +478,18 @@ nv50_head_reset(struct drm_crtc *crtc)
__drm_atomic_helper_crtc_reset(crtc, &asyh->state);
}
static int
nv50_head_late_register(struct drm_crtc *crtc)
{
return nv50_head_crc_late_register(nv50_head(crtc));
}
static void
nv50_head_destroy(struct drm_crtc *crtc)
{
struct nv50_head *head = nv50_head(crtc);
nvif_notify_fini(&head->base.vblank);
nv50_lut_fini(&head->olut);
drm_crtc_cleanup(crtc);
kfree(head);
@ -486,8 +507,38 @@ nv50_head_func = {
.enable_vblank = nouveau_display_vblank_enable,
.disable_vblank = nouveau_display_vblank_disable,
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
.late_register = nv50_head_late_register,
};
static const struct drm_crtc_funcs
nvd9_head_func = {
.reset = nv50_head_reset,
.gamma_set = drm_atomic_helper_legacy_gamma_set,
.destroy = nv50_head_destroy,
.set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip,
.atomic_duplicate_state = nv50_head_atomic_duplicate_state,
.atomic_destroy_state = nv50_head_atomic_destroy_state,
.enable_vblank = nouveau_display_vblank_enable,
.disable_vblank = nouveau_display_vblank_disable,
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
.verify_crc_source = nv50_crc_verify_source,
.get_crc_sources = nv50_crc_get_sources,
.set_crc_source = nv50_crc_set_source,
.late_register = nv50_head_late_register,
};
static int nv50_head_vblank_handler(struct nvif_notify *notify)
{
struct nouveau_crtc *nv_crtc =
container_of(notify, struct nouveau_crtc, vblank);
if (drm_crtc_handle_vblank(&nv_crtc->base))
nv50_crc_handle_vblank(nv50_head(&nv_crtc->base));
return NVIF_NOTIFY_KEEP;
}
struct nv50_head *
nv50_head_create(struct drm_device *dev, int index)
{
@ -495,7 +546,9 @@ nv50_head_create(struct drm_device *dev, int index)
struct nv50_disp *disp = nv50_disp(dev);
struct nv50_head *head;
struct nv50_wndw *base, *ovly, *curs;
struct nouveau_crtc *nv_crtc;
struct drm_crtc *crtc;
const struct drm_crtc_funcs *funcs;
int ret;
head = kzalloc(sizeof(*head), GFP_KERNEL);
@ -505,6 +558,11 @@ nv50_head_create(struct drm_device *dev, int index)
head->func = disp->core->func->head;
head->base.index = index;
if (disp->disp->object.oclass < GF110_DISP)
funcs = &nv50_head_func;
else
funcs = &nvd9_head_func;
if (disp->disp->object.oclass < GV100_DISP) {
ret = nv50_base_new(drm, head->base.index, &base);
ret = nv50_ovly_new(drm, head->base.index, &ovly);
@ -521,9 +579,10 @@ nv50_head_create(struct drm_device *dev, int index)
return ERR_PTR(ret);
}
crtc = &head->base.base;
nv_crtc = &head->base;
crtc = &nv_crtc->base;
drm_crtc_init_with_planes(dev, crtc, &base->plane, &curs->plane,
&nv50_head_func, "head-%d", head->base.index);
funcs, "head-%d", head->base.index);
drm_crtc_helper_add(crtc, &nv50_head_help);
/* Keep the legacy gamma size at 256 to avoid compatibility issues */
drm_mode_crtc_set_gamma_size(crtc, 256);
@ -539,5 +598,16 @@ nv50_head_create(struct drm_device *dev, int index)
}
}
ret = nvif_notify_init(&disp->disp->object, nv50_head_vblank_handler,
false, NV04_DISP_NTFY_VBLANK,
&(struct nvif_notify_head_req_v0) {
.head = nv_crtc->index,
},
sizeof(struct nvif_notify_head_req_v0),
sizeof(struct nvif_notify_head_rep_v0),
&nv_crtc->vblank);
if (ret)
return ERR_PTR(ret);
return head;
}


@ -1,22 +1,28 @@
#ifndef __NV50_KMS_HEAD_H__
#define __NV50_KMS_HEAD_H__
#define nv50_head(c) container_of((c), struct nv50_head, base.base)
#include <linux/workqueue.h>
#include "disp.h"
#include "atom.h"
#include "crc.h"
#include "lut.h"
#include "nouveau_crtc.h"
#include "nouveau_encoder.h"
struct nv50_head {
const struct nv50_head_func *func;
struct nouveau_crtc base;
struct nv50_crc crc;
struct nv50_lut olut;
struct nv50_msto *msto;
};
struct nv50_head *nv50_head_create(struct drm_device *, int index);
void nv50_head_flush_set(struct nv50_head *, struct nv50_head_atom *);
void nv50_head_flush_clr(struct nv50_head *, struct nv50_head_atom *, bool y);
void nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh);
void nv50_head_flush_clr(struct nv50_head *head,
struct nv50_head_atom *asyh, bool flush);
struct nv50_head_func {
void (*view)(struct nv50_head *, struct nv50_head_atom *);
@ -40,6 +46,7 @@ struct nv50_head_func {
void (*dither)(struct nv50_head *, struct nv50_head_atom *);
void (*procamp)(struct nv50_head *, struct nv50_head_atom *);
void (*or)(struct nv50_head *, struct nv50_head_atom *);
void (*static_wndw_map)(struct nv50_head *, struct nv50_head_atom *);
};
extern const struct nv50_head_func head507d;
@ -86,6 +93,7 @@ int headc37d_curs_format(struct nv50_head *, struct nv50_wndw_atom *,
void headc37d_curs_set(struct nv50_head *, struct nv50_head_atom *);
void headc37d_curs_clr(struct nv50_head *);
void headc37d_dither(struct nv50_head *, struct nv50_head_atom *);
void headc37d_static_wndw_map(struct nv50_head *, struct nv50_head_atom *);
extern const struct nv50_head_func headc57d;
#endif


@ -19,8 +19,15 @@
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_connector.h>
#include <drm/drm_mode_config.h>
#include <drm/drm_vblank.h>
#include "nouveau_drv.h"
#include "nouveau_bios.h"
#include "nouveau_connector.h"
#include "head.h"
#include "core.h"
#include "crc.h"
void
head907d_or(struct nv50_head *head, struct nv50_head_atom *asyh)
@ -29,9 +36,10 @@ head907d_or(struct nv50_head *head, struct nv50_head_atom *asyh)
u32 *push;
if ((push = evo_wait(core, 3))) {
evo_mthd(push, 0x0404 + (head->base.index * 0x300), 2);
evo_data(push, 0x00000001 | asyh->or.depth << 6 |
asyh->or.nvsync << 4 |
asyh->or.nhsync << 3);
evo_data(push, asyh->or.depth << 6 |
asyh->or.nvsync << 4 |
asyh->or.nhsync << 3 |
asyh->or.crc_raster);
evo_data(push, 0x31ec6000 | head->base.index << 25 |
asyh->mode.interlace);
evo_kick(push, core);


@ -27,26 +27,29 @@ static void
headc37d_or(struct nv50_head *head, struct nv50_head_atom *asyh)
{
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
u8 depth;
u32 *push;
if ((push = evo_wait(core, 2))) {
/*XXX: This is a dirty hack until OR depth handling is
* improved later for deep colour etc.
*/
switch (asyh->or.depth) {
case 6: asyh->or.depth = 5; break;
case 5: asyh->or.depth = 4; break;
case 2: asyh->or.depth = 1; break;
case 0: asyh->or.depth = 4; break;
case 6: depth = 5; break;
case 5: depth = 4; break;
case 2: depth = 1; break;
case 0: depth = 4; break;
default:
depth = asyh->or.depth;
WARN_ON(1);
break;
}
evo_mthd(push, 0x2004 + (head->base.index * 0x400), 1);
evo_data(push, 0x00000001 |
asyh->or.depth << 4 |
evo_data(push, depth << 4 |
asyh->or.nvsync << 3 |
asyh->or.nhsync << 2);
asyh->or.nhsync << 2 |
asyh->or.crc_raster);
evo_kick(push, core);
}
}
@ -201,6 +204,15 @@ headc37d_view(struct nv50_head *head, struct nv50_head_atom *asyh)
}
}
void
headc37d_static_wndw_map(struct nv50_head *head, struct nv50_head_atom *asyh)
{
int i, end;
for (i = head->base.index * 2, end = i + 2; i < end; i++)
asyh->wndw.owned |= BIT(i);
}
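/*
* Worked example: for head index 1 this marks windows 2 and 3 as owned
* (BIT(2) | BIT(3)), i.e. a fixed two-windows-per-head mapping.
*/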
const struct nv50_head_func
headc37d = {
.view = headc37d_view,
@ -216,4 +228,5 @@ headc37d = {
.dither = headc37d_dither,
.procamp = headc37d_procamp,
.or = headc37d_or,
.static_wndw_map = headc37d_static_wndw_map,
};


@ -27,26 +27,30 @@ static void
headc57d_or(struct nv50_head *head, struct nv50_head_atom *asyh)
{
struct nv50_dmac *core = &nv50_disp(head->base.base.dev)->core->chan;
u8 depth;
u32 *push;
if ((push = evo_wait(core, 2))) {
/*XXX: This is a dirty hack until OR depth handling is
* improved later for deep colour etc.
*/
switch (asyh->or.depth) {
case 6: asyh->or.depth = 5; break;
case 5: asyh->or.depth = 4; break;
case 2: asyh->or.depth = 1; break;
case 0: asyh->or.depth = 4; break;
case 6: depth = 5; break;
case 5: depth = 4; break;
case 2: depth = 1; break;
case 0: depth = 4; break;
default:
depth = asyh->or.depth;
WARN_ON(1);
break;
}
evo_mthd(push, 0x2004 + (head->base.index * 0x400), 1);
evo_data(push, 0xfc000001 |
asyh->or.depth << 4 |
evo_data(push, 0xfc000000 |
depth << 4 |
asyh->or.nvsync << 3 |
asyh->or.nhsync << 2);
asyh->or.nhsync << 2 |
asyh->or.crc_raster);
evo_kick(push, core);
}
}
@@ -208,4 +212,6 @@ headc57d = {
.dither = headc37d_dither,
.procamp = headc57d_procamp,
.or = headc57d_or,
/* TODO: flexible window mappings */
.static_wndw_map = headc37d_static_wndw_map,
};


@@ -21,6 +21,7 @@
*/
#include "wndw.h"
#include "wimm.h"
#include "handles.h"
#include <nvif/class.h>
#include <nvif/cl0002.h>
@@ -59,7 +60,7 @@ nv50_wndw_ctxdma_new(struct nv50_wndw *wndw, struct drm_framebuffer *fb)
int ret;
nouveau_framebuffer_get_layout(fb, &unused, &kind);
handle = 0xfb000000 | kind;
handle = NV50_DISP_HANDLE_WNDW_CTX(kind);
list_for_each_entry(ctxdma, &wndw->ctxdma.list, head) {
if (ctxdma->object.handle == handle)
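
The open-coded handle 0xfb000000 | kind becomes NV50_DISP_HANDLE_WNDW_CTX(kind)
from the new handles.h, which centralizes the display object handle namespaces.
The macro itself is not visible in this hunk; presumably it wraps the same
encoding, roughly along these hypothetical lines:

    /* Hypothetical reconstruction; see the new handles.h for the real
     * definition and the other handle namespaces it carries. */
    #define NV50_DISP_HANDLE_WNDW_CTX(kind) (0xfb000000 | (kind))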


@@ -655,13 +655,12 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = 0;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = TTM_MEMTYPE_FLAG_FIXED;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
@@ -675,7 +674,6 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
}
man->func = &nouveau_vram_manager;
man->io_reserve_fastpath = false;
man->use_io_reserve_lru = true;
} else {
man->func = &ttm_bo_manager_func;
@@ -691,13 +689,12 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
man->func = &ttm_bo_manager_func;
if (drm->agp.bridge) {
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = 0;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
} else {
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE |
TTM_MEMTYPE_FLAG_CMA;
man->flags = 0;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
}
@@ -1439,7 +1436,6 @@ nouveau_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
static int
nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *reg)
{
struct ttm_mem_type_manager *man = &bdev->man[reg->mem_type];
struct nouveau_drm *drm = nouveau_bdev(bdev);
struct nvkm_device *device = nvxx_device(&drm->client.device);
struct nouveau_mem *mem = nouveau_mem(reg);
@@ -1449,8 +1445,7 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *reg)
reg->bus.size = reg->num_pages << PAGE_SHIFT;
reg->bus.base = 0;
reg->bus.is_iomem = false;
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
switch (reg->mem_type) {
case TTM_PL_SYSTEM:
/* System memory */
@@ -1505,8 +1500,6 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *reg)
if (ret != 1) {
if (WARN_ON(ret == 0))
return -EINVAL;
if (ret == -ENOSPC)
return -EAGAIN;
return ret;
}


@@ -44,15 +44,7 @@
#include <nvif/class.h>
#include <nvif/cl0046.h>
#include <nvif/event.h>
static int
nouveau_display_vblank_handler(struct nvif_notify *notify)
{
struct nouveau_crtc *nv_crtc =
container_of(notify, typeof(*nv_crtc), vblank);
drm_crtc_handle_vblank(&nv_crtc->base);
return NVIF_NOTIFY_KEEP;
}
#include <dispnv50/crc.h>
int
nouveau_display_vblank_enable(struct drm_crtc *crtc)
@@ -136,50 +128,6 @@ nouveau_display_scanoutpos(struct drm_crtc *crtc,
stime, etime);
}
static void
nouveau_display_vblank_fini(struct drm_device *dev)
{
struct drm_crtc *crtc;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
nvif_notify_fini(&nv_crtc->vblank);
}
}
static int
nouveau_display_vblank_init(struct drm_device *dev)
{
struct nouveau_display *disp = nouveau_display(dev);
struct drm_crtc *crtc;
int ret;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
ret = nvif_notify_init(&disp->disp.object,
nouveau_display_vblank_handler, false,
NV04_DISP_NTFY_VBLANK,
&(struct nvif_notify_head_req_v0) {
.head = nv_crtc->index,
},
sizeof(struct nvif_notify_head_req_v0),
sizeof(struct nvif_notify_head_rep_v0),
&nv_crtc->vblank);
if (ret) {
nouveau_display_vblank_fini(dev);
return ret;
}
}
ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
if (ret) {
nouveau_display_vblank_fini(dev);
return ret;
}
return 0;
}
static const struct drm_framebuffer_funcs nouveau_framebuffer_funcs = {
.destroy = drm_gem_fb_destroy,
.create_handle = drm_gem_fb_create_handle,
@@ -705,9 +653,12 @@ nouveau_display_create(struct drm_device *dev)
drm_mode_config_reset(dev);
if (dev->mode_config.num_crtc) {
ret = nouveau_display_vblank_init(dev);
ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
if (ret)
goto vblank_err;
if (disp->disp.object.oclass >= NV50_DISP)
nv50_crc_init(dev);
}
INIT_WORK(&drm->hpd_work, nouveau_display_hpd_work);
@@ -734,7 +685,6 @@ nouveau_display_destroy(struct drm_device *dev)
#ifdef CONFIG_ACPI
unregister_acpi_notifier(&nouveau_drm(dev)->acpi_nb);
#endif
nouveau_display_vblank_fini(dev);
drm_kms_helper_poll_fini(dev);
drm_mode_config_cleanup(dev);
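
With the nvif notifier plumbing gone from this file, display creation relies
on the DRM core's vblank bookkeeping directly, and NV50-class hardware
additionally gets CRC support wired up via the new nv50_crc_init(). The core
contract is unchanged and minimal (a simplified sketch; both calls are
existing DRM core API, only their call sites move in this patch):

    /* once at load time, before any drm_crtc_vblank_on(): */
    ret = drm_vblank_init(dev, dev->mode_config.num_crtc);

    /* per hardware vblank event, wherever the interrupt is received: */
    drm_crtc_handle_vblank(crtc);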


@@ -1753,22 +1753,36 @@ static const struct panel_desc foxlink_fl500wvr00_a0t = {
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
};
static const struct drm_display_mode frida_frd350h54004_mode = {
.clock = 6000,
.hdisplay = 320,
.hsync_start = 320 + 44,
.hsync_end = 320 + 44 + 16,
.htotal = 320 + 44 + 16 + 20,
.vdisplay = 240,
.vsync_start = 240 + 2,
.vsync_end = 240 + 2 + 6,
.vtotal = 240 + 2 + 6 + 2,
.flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
static const struct drm_display_mode frida_frd350h54004_modes[] = {
{ /* 60 Hz */
.clock = 6000,
.hdisplay = 320,
.hsync_start = 320 + 44,
.hsync_end = 320 + 44 + 16,
.htotal = 320 + 44 + 16 + 20,
.vdisplay = 240,
.vsync_start = 240 + 2,
.vsync_end = 240 + 2 + 6,
.vtotal = 240 + 2 + 6 + 2,
.flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
},
{ /* 50 Hz */
.clock = 5400,
.hdisplay = 320,
.hsync_start = 320 + 56,
.hsync_end = 320 + 56 + 16,
.htotal = 320 + 56 + 16 + 40,
.vdisplay = 240,
.vsync_start = 240 + 2,
.vsync_end = 240 + 2 + 6,
.vtotal = 240 + 2 + 6 + 2,
.flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
},
};
static const struct panel_desc frida_frd350h54004 = {
.modes = &frida_frd350h54004_mode,
.num_modes = 1,
.modes = frida_frd350h54004_modes,
.num_modes = ARRAY_SIZE(frida_frd350h54004_modes),
.bpc = 8,
.size = {
.width = 77,

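Besides adding the 50 Hz entry, note that both modes flip the sync polarities
from positive (PHSYNC/PVSYNC) to negative (NHSYNC/NVSYNC). The refresh rates
follow directly from the timings, since .clock is in kHz and
refresh = clock * 1000 / (htotal * vtotal). A standalone check using only the
numbers shown above:

    #include <stdio.h>

    static int mode_vrefresh(int clock_khz, int htotal, int vtotal)
    {
            return clock_khz * 1000 / (htotal * vtotal);
    }

    int main(void)
    {
            /* 60 Hz mode: htotal = 320+44+16+20 = 400, vtotal = 240+2+6+2 = 250 */
            printf("%d Hz\n", mode_vrefresh(6000, 400, 250)); /* prints 60 */
            /* 50 Hz mode: htotal = 320+56+16+40 = 432, vtotal = 250 */
            printf("%d Hz\n", mode_vrefresh(5400, 432, 250)); /* prints 50 */
            return 0;
    }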

@@ -54,7 +54,7 @@ static int qxl_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
switch (type) {
case TTM_PL_SYSTEM:
/* System memory */
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = 0;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
@@ -62,8 +62,7 @@ static int qxl_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
case TTM_PL_PRIV:
/* "On-card" video ram */
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = TTM_MEMTYPE_FLAG_FIXED;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
@@ -99,7 +98,6 @@ static void qxl_evict_flags(struct ttm_buffer_object *bo,
int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
struct qxl_device *qdev = qxl_get_qdev(bdev);
mem->bus.addr = NULL;
@@ -107,8 +105,7 @@ int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
mem->bus.size = mem->num_pages << PAGE_SHIFT;
mem->bus.base = 0;
mem->bus.is_iomem = false;
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
switch (mem->mem_type) {
case TTM_PL_SYSTEM:
/* system memory */
@@ -129,11 +126,6 @@ int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
return 0;
}
static void qxl_ttm_io_mem_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
}
/*
* TTM backend functions.
*/
@@ -247,7 +239,6 @@ static struct ttm_bo_driver qxl_bo_driver = {
.evict_flags = &qxl_evict_flags,
.move = &qxl_bo_move,
.io_mem_reserve = &qxl_ttm_io_mem_reserve,
.io_mem_free = &qxl_ttm_io_mem_free,
.move_notify = &qxl_bo_move_notify,
};


@@ -84,7 +84,7 @@ static int radeon_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
man->func = &ttm_bo_manager_func;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE | TTM_MEMTYPE_FLAG_CMA;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
#if IS_ENABLED(CONFIG_AGP)
if (rdev->flags & RADEON_IS_AGP) {
if (!rdev->ddev->agp) {
@@ -457,10 +457,6 @@ static int radeon_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_
return 0;
}
static void radeon_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
}
/*
* TTM backend functions.
*/
@@ -774,7 +770,6 @@ static struct ttm_bo_driver radeon_bo_driver = {
.move_notify = &radeon_bo_move_notify,
.fault_reserve_notify = &radeon_bo_fault_reserve_notify,
.io_mem_reserve = &radeon_ttm_io_mem_reserve,
.io_mem_free = &radeon_ttm_io_mem_free,
};
int radeon_ttm_init(struct radeon_device *rdev)


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2016-2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2016-2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Jyri Sarha <jsarha@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2016-2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2016-2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Jyri Sarha <jsarha@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
*/


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Jyri Sarha <jsarha@ti.com>
*/


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2018 Texas Instruments Incorporated - https://www.ti.com/
* Author: Jyri Sarha <jsarha@ti.com>
*/


@@ -272,20 +272,15 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
struct ttm_operation_ctx *ctx)
{
struct ttm_bo_device *bdev = bo->bdev;
bool old_is_pci = ttm_mem_reg_is_pci(bdev, &bo->mem);
bool new_is_pci = ttm_mem_reg_is_pci(bdev, mem);
struct ttm_mem_type_manager *old_man = &bdev->man[bo->mem.mem_type];
struct ttm_mem_type_manager *new_man = &bdev->man[mem->mem_type];
int ret = 0;
int ret;
if (old_is_pci || new_is_pci ||
((mem->placement & bo->mem.placement & TTM_PL_MASK_CACHING) == 0)) {
ret = ttm_mem_io_lock(old_man, true);
if (unlikely(ret != 0))
goto out_err;
ttm_bo_unmap_virtual_locked(bo);
ttm_mem_io_unlock(old_man);
}
ret = ttm_mem_io_lock(old_man, true);
if (unlikely(ret != 0))
goto out_err;
ttm_bo_unmap_virtual_locked(bo);
ttm_mem_io_unlock(old_man);
/*
* Create and bind a ttm if required.
@@ -1521,7 +1516,6 @@ int ttm_bo_init_mm(struct ttm_bo_device *bdev, unsigned type,
BUG_ON(type >= TTM_NUM_MEM_TYPES);
man = &bdev->man[type];
BUG_ON(man->has_type);
man->io_reserve_fastpath = true;
man->use_io_reserve_lru = false;
mutex_init(&man->io_reserve_mutex);
spin_lock_init(&man->move_lock);
@@ -1699,23 +1693,6 @@ EXPORT_SYMBOL(ttm_bo_device_init);
* buffer object vm functions.
*/
bool ttm_mem_reg_is_pci(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
if (!(man->flags & TTM_MEMTYPE_FLAG_FIXED)) {
if (mem->mem_type == TTM_PL_SYSTEM)
return false;
if (man->flags & TTM_MEMTYPE_FLAG_CMA)
return false;
if (mem->placement & TTM_PL_FLAG_CACHED)
return false;
}
return true;
}
void ttm_bo_unmap_virtual_locked(struct ttm_buffer_object *bo)
{
struct ttm_bo_device *bdev = bo->bdev;


@@ -93,7 +93,7 @@ EXPORT_SYMBOL(ttm_bo_move_ttm);
int ttm_mem_io_lock(struct ttm_mem_type_manager *man, bool interruptible)
{
if (likely(man->io_reserve_fastpath))
if (likely(!man->use_io_reserve_lru))
return 0;
if (interruptible)
@@ -105,7 +105,7 @@ int ttm_mem_io_lock(struct ttm_mem_type_manager *man, bool interruptible)
void ttm_mem_io_unlock(struct ttm_mem_type_manager *man)
{
if (likely(man->io_reserve_fastpath))
if (likely(!man->use_io_reserve_lru))
return;
mutex_unlock(&man->io_reserve_mutex);
@@ -115,39 +115,35 @@ static int ttm_mem_io_evict(struct ttm_mem_type_manager *man)
{
struct ttm_buffer_object *bo;
if (!man->use_io_reserve_lru || list_empty(&man->io_reserve_lru))
return -EAGAIN;
bo = list_first_entry_or_null(&man->io_reserve_lru,
struct ttm_buffer_object,
io_reserve_lru);
if (!bo)
return -ENOSPC;
bo = list_first_entry(&man->io_reserve_lru,
struct ttm_buffer_object,
io_reserve_lru);
list_del_init(&bo->io_reserve_lru);
ttm_bo_unmap_virtual_locked(bo);
return 0;
}
int ttm_mem_io_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
int ret = 0;
int ret;
if (mem->bus.io_reserved_count++)
return 0;
if (!bdev->driver->io_mem_reserve)
return 0;
if (likely(man->io_reserve_fastpath))
return bdev->driver->io_mem_reserve(bdev, mem);
if (bdev->driver->io_mem_reserve &&
mem->bus.io_reserved_count++ == 0) {
retry:
ret = bdev->driver->io_mem_reserve(bdev, mem);
if (ret == -EAGAIN) {
ret = ttm_mem_io_evict(man);
if (ret == 0)
goto retry;
}
ret = bdev->driver->io_mem_reserve(bdev, mem);
if (ret == -ENOSPC) {
ret = ttm_mem_io_evict(man);
if (ret == 0)
goto retry;
}
return ret;
}
@@ -155,35 +151,31 @@ int ttm_mem_io_reserve(struct ttm_bo_device *bdev,
void ttm_mem_io_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
if (likely(man->io_reserve_fastpath))
if (--mem->bus.io_reserved_count)
return;
if (bdev->driver->io_mem_reserve &&
--mem->bus.io_reserved_count == 0 &&
bdev->driver->io_mem_free)
bdev->driver->io_mem_free(bdev, mem);
if (!bdev->driver->io_mem_free)
return;
bdev->driver->io_mem_free(bdev, mem);
}
int ttm_mem_io_reserve_vm(struct ttm_buffer_object *bo)
{
struct ttm_mem_type_manager *man = &bo->bdev->man[bo->mem.mem_type];
struct ttm_mem_reg *mem = &bo->mem;
int ret;
if (!mem->bus.io_reserved_vm) {
struct ttm_mem_type_manager *man =
&bo->bdev->man[mem->mem_type];
if (mem->bus.io_reserved_vm)
return 0;
ret = ttm_mem_io_reserve(bo->bdev, mem);
if (unlikely(ret != 0))
return ret;
mem->bus.io_reserved_vm = true;
if (man->use_io_reserve_lru)
list_add_tail(&bo->io_reserve_lru,
&man->io_reserve_lru);
}
ret = ttm_mem_io_reserve(bo->bdev, mem);
if (unlikely(ret != 0))
return ret;
mem->bus.io_reserved_vm = true;
if (man->use_io_reserve_lru)
list_add_tail(&bo->io_reserve_lru,
&man->io_reserve_lru);
return 0;
}
@@ -191,15 +183,17 @@ void ttm_mem_io_free_vm(struct ttm_buffer_object *bo)
{
struct ttm_mem_reg *mem = &bo->mem;
if (mem->bus.io_reserved_vm) {
mem->bus.io_reserved_vm = false;
list_del_init(&bo->io_reserve_lru);
ttm_mem_io_free(bo->bdev, mem);
}
if (!mem->bus.io_reserved_vm)
return;
mem->bus.io_reserved_vm = false;
list_del_init(&bo->io_reserve_lru);
ttm_mem_io_free(bo->bdev, mem);
}
static int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
void **virtual)
static int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem,
void **virtual)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
int ret;
@@ -216,9 +210,11 @@ static int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev, struct ttm_mem_reg *m
addr = mem->bus.addr;
} else {
if (mem->placement & TTM_PL_FLAG_WC)
addr = ioremap_wc(mem->bus.base + mem->bus.offset, mem->bus.size);
addr = ioremap_wc(mem->bus.base + mem->bus.offset,
mem->bus.size);
else
addr = ioremap(mem->bus.base + mem->bus.offset, mem->bus.size);
addr = ioremap(mem->bus.base + mem->bus.offset,
mem->bus.size);
if (!addr) {
(void) ttm_mem_io_lock(man, false);
ttm_mem_io_free(bdev, mem);
@@ -230,8 +226,9 @@ static int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev, struct ttm_mem_reg *m
return 0;
}
static void ttm_mem_reg_iounmap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
void *virtual)
static void ttm_mem_reg_iounmap(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem,
void *virtual)
{
struct ttm_mem_type_manager *man;
@@ -513,11 +510,13 @@ static int ttm_bo_ioremap(struct ttm_buffer_object *bo,
} else {
map->bo_kmap_type = ttm_bo_map_iomap;
if (mem->placement & TTM_PL_FLAG_WC)
map->virtual = ioremap_wc(bo->mem.bus.base + bo->mem.bus.offset + offset,
map->virtual = ioremap_wc(bo->mem.bus.base +
bo->mem.bus.offset + offset,
size);
else
map->virtual = ioremap(bo->mem.bus.base + bo->mem.bus.offset + offset,
size);
map->virtual = ioremap(bo->mem.bus.base +
bo->mem.bus.offset + offset,
size);
}
return (!map->virtual) ? -ENOMEM : 0;
}
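
The net effect of this cleanup: ttm_mem_io_reserve() and ttm_mem_io_free()
now implement a plain refcount over the optional driver hooks -- the driver's
io_mem_reserve() runs only on the 0 -> 1 transition of io_reserved_count
(retrying via LRU eviction on -ENOSPC rather than the old -EAGAIN), and
io_mem_free() only on the 1 -> 0 transition. A standalone model of that
contract (plain C, not kernel code):

    #include <stdio.h>

    struct mem_reg { int io_reserved_count; };

    static void driver_io_mem_reserve(void) { puts("driver reserve"); }
    static void driver_io_mem_free(void)    { puts("driver free"); }

    static int mem_io_reserve(struct mem_reg *mem)
    {
            if (mem->io_reserved_count++)
                    return 0;               /* already reserved, just count */
            driver_io_mem_reserve();        /* expensive hook on 0 -> 1 only */
            return 0;
    }

    static void mem_io_free(struct mem_reg *mem)
    {
            if (--mem->io_reserved_count)
                    return;                 /* still in use elsewhere */
            driver_io_mem_free();           /* hook on 1 -> 0 only */
    }

    int main(void)
    {
            struct mem_reg mem = { 0 };

            mem_io_reserve(&mem);   /* "driver reserve" */
            mem_io_reserve(&mem);   /* counted only */
            mem_io_free(&mem);      /* counted only */
            mem_io_free(&mem);      /* "driver free" */
            return 0;
    }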


@@ -58,7 +58,7 @@ struct hgsmi_buffer_tail {
/* Reserved, must be initialized to 0. */
u32 reserved;
/*
* One-at-a-Time Hash: http://www.burtleburtle.net/bob/hash/doobs.html
* One-at-a-Time Hash: https://www.burtleburtle.net/bob/hash/doobs.html
* Over the header, offset and for first 4 bytes of the tail.
*/
u32 checksum;


@@ -8,7 +8,7 @@
#include "vboxvideo_vbe.h"
#include "hgsmi_defs.h"
/* One-at-a-Time Hash from http://www.burtleburtle.net/bob/hash/doobs.html */
/* One-at-a-Time Hash from https://www.burtleburtle.net/bob/hash/doobs.html */
static u32 hgsmi_hash_process(u32 hash, const u8 *data, int size)
{
while (size--) {

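The hunk above is cut off at the loop head, but hgsmi_hash_process()
implements the per-byte step of Bob Jenkins' one-at-a-time hash named in the
comment. For reference, the classic algorithm in full (a generic standalone
sketch, not a copy of the driver's helpers):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t one_at_a_time(const uint8_t *data, size_t size)
    {
            uint32_t hash = 0;

            while (size--) {        /* per-byte mixing step */
                    hash += *data++;
                    hash += hash << 10;
                    hash ^= hash >> 6;
            }
            hash += hash << 3;      /* final avalanche pass */
            hash ^= hash >> 11;
            hash += hash << 15;
            return hash;
    }

    int main(void)
    {
            const uint8_t msg[] = "hgsmi";

            printf("0x%08x\n", one_at_a_time(msg, sizeof(msg) - 1));
            return 0;
    }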

@@ -742,15 +742,13 @@ static int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
switch (type) {
case TTM_PL_SYSTEM:
/* System memory */
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_CACHED;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
/* "On-card" video ram */
man->func = &vmw_thp_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
man->flags = TTM_MEMTYPE_FLAG_FIXED;
man->available_caching = TTM_PL_FLAG_CACHED;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
@@ -762,7 +760,6 @@ static int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
* slots as well as the bo size.
*/
man->func = &vmw_gmrid_manager_func;
man->flags = TTM_MEMTYPE_FLAG_CMA | TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_CACHED;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
@@ -789,7 +786,6 @@ static int vmw_verify_access(struct ttm_buffer_object *bo, struct file *filp)
static int vmw_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev);
mem->bus.addr = NULL;
@@ -797,8 +793,7 @@ static int vmw_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg
mem->bus.offset = 0;
mem->bus.size = mem->num_pages << PAGE_SHIFT;
mem->bus.base = 0;
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
switch (mem->mem_type) {
case TTM_PL_SYSTEM:
case VMW_PL_GMR:
@@ -815,15 +810,6 @@ static int vmw_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg
return 0;
}
static void vmw_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
}
static int vmw_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
{
return 0;
}
/**
* vmw_move_notify - TTM move_notify_callback
*
@@ -866,7 +852,5 @@ struct ttm_bo_driver vmw_bo_driver = {
.verify_access = vmw_verify_access,
.move_notify = vmw_move_notify,
.swap_notify = vmw_swap_notify,
.fault_reserve_notify = &vmw_ttm_fault_reserve_notify,
.io_mem_reserve = &vmw_ttm_io_mem_reserve,
.io_mem_free = &vmw_ttm_io_mem_free,
};


@@ -824,7 +824,7 @@ config FB_OPENCORES
systems (e.g. Altera socfpga or Xilinx Zynq) on FPGAs.
The source code and specification for the core is available at
<http://opencores.org/project,vga_lcd>
<https://opencores.org/project,vga_lcd>
config FB_S1D13XXX
tristate "Epson S1D13XXX framebuffer support"
@@ -835,7 +835,7 @@ config FB_S1D13XXX
help
Support for S1D13XXX framebuffer device family (currently only
working with S1D13806). Product specs at
<http://vdc.epson.com/>
<https://vdc.epson.com/>
config FB_ATMEL
tristate "AT91 LCD Controller support"
@@ -1193,7 +1193,7 @@ config FB_RADEON
don't need to choose this to run the Radeon in plain VGA mode.
There is a product page at
http://products.amd.com/en-us/GraphicCardResult.aspx
https://products.amd.com/en-us/GraphicCardResult.aspx
config FB_RADEON_I2C
bool "DDC/I2C for ATI Radeon support"
@@ -1361,7 +1361,7 @@ config FB_SIS
help
This is the frame buffer device driver for the SiS 300, 315, 330
and 340 series as well as XGI V3XT, V5, V8, Z7 graphics chipsets.
Specs available at <http://www.sis.com> and <http://www.xgitech.com>.
Specs available at <https://www.sis.com> and <http://www.xgitech.com>.
To compile this driver as a module, choose M here; the module
will be called sisfb.


@@ -19,7 +19,7 @@
* Generalized Timing Formula is derived from:
*
* GTF Spreadsheet by Andy Morrish (1/5/97)
* available at http://www.vesa.org
* available at https://www.vesa.org
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file COPYING in the main directory of this archive
@@ -1201,7 +1201,7 @@ static void fb_timings_dclk(struct __fb_timings *timings)
* ignored and @var will be filled with the calculated timings.
*
* All calculations are based on the VESA GTF Spreadsheet
* available at VESA's public ftp (http://www.vesa.org).
* available at VESA's public ftp (https://www.vesa.org).
*
* NOTES:
* The timings generated by the GTF will be different from VESA


@@ -430,7 +430,7 @@ static int ep93xxfb_alloc_videomem(struct fb_info *info)
/*
* There is a bug in the ep93xx framebuffer which causes problems
* if bit 27 of the physical address is set.
* See: http://marc.info/?l=linux-arm-kernel&m=110061245502000&w=2
* See: https://marc.info/?l=linux-arm-kernel&m=110061245502000&w=2
* There does not seem to be any official errata for this, but I
* have confirmed the problem exists on my hardware (ep9315) at
* least.


@@ -5,7 +5,7 @@
* 2011 (c) Aeroflex Gaisler AB
*
* Full documentation of the core can be found here:
* http://www.gaisler.com/products/grlib/grip.pdf
* https://www.gaisler.com/products/grlib/grip.pdf
*
* Contributors: Kristoffer Glembo <kristoffer@gaisler.com>
*/


@@ -478,7 +478,7 @@ static int macfb_setcolreg(unsigned regno, unsigned red, unsigned green,
break;
/*
* 24-bit colour almost doesn't exist on 68k Macs --
* http://support.apple.com/kb/TA28634 (Old Article: 10992)
* https://support.apple.com/kb/TA28634 (Old Article: 10992)
*/
case 24:
case 32:


@@ -10,7 +10,7 @@
* Layout is based on skeletonfb.c by James Simmons and Geert Uytterhoeven.
*
* This work was made possible by help and equipment support from E-Ink
* Corporation. http://www.eink.com/
* Corporation. https://www.eink.com/
*
* This driver is written to be used with the Metronome display controller.
* It is intended to be architecture independent. A board specific driver


@@ -60,7 +60,7 @@ config FB_OMAP5_DSS_HDMI
select FB_OMAP2_DSS_HDMI_COMMON
help
HDMI Interface for OMAP5 and similar cores. This adds the High
Definition Multimedia Interface. See http://www.hdmi.org/ for HDMI
Definition Multimedia Interface. See https://www.hdmi.org/ for HDMI
specification.
config FB_OMAP2_DSS_SDI
@@ -79,7 +79,7 @@ config FB_OMAP2_DSS_DSI
DSI is a high speed half-duplex serial interface between the host
processor and a peripheral, such as a display or a framebuffer chip.
See http://www.mipi.org/ for DSI specifications.
See https://www.mipi.org/ for DSI specifications.
config FB_OMAP2_DSS_MIN_FCK_PER_PCK
int "Minimum FCK/PCK ratio (for scaling)"


@@ -2,7 +2,7 @@
/*
* HDMI driver definition for TI OMAP4 Processor.
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com/
*/
#ifndef _HDMI_H


@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* HDMI interface DSS driver for TI's OMAP4 family of SoCs.
* Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com/
* Authors: Yong Zhi
* Mythri pk <mythripk@ti.com>
*/


@@ -3,7 +3,7 @@
* ti_hdmi_4xxx_ip.c
*
* HDMI TI81xx, TI38xx, TI OMAP4 etc IP driver Library
* Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com/
* Authors: Yong Zhi
* Mythri pk <mythripk@ti.com>
*/


@@ -2,7 +2,7 @@
/*
* HDMI header definition for OMAP4 HDMI core IP
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com/
*/
#ifndef _HDMI4_CORE_H_


@@ -2,7 +2,7 @@
/*
* HDMI driver definition for TI OMAP5 processors.
*
* Copyright (C) 2011-2012 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2011-2012 Texas Instruments Incorporated - https://www.ti.com/
*/
#ifndef _HDMI5_CORE_H_


@@ -18,7 +18,7 @@
* Clean patches should be sent to the ARM Linux Patch System. Please see the
* following web page for more information:
*
* http://www.arm.linux.org.uk/developer/patches/info.shtml
* https://www.arm.linux.org.uk/developer/patches/info.shtml
*
* Thank you.
*


@@ -206,6 +206,9 @@ struct drm_vram_mm *drm_vram_helper_alloc_mm(
struct drm_device *dev, uint64_t vram_base, size_t vram_size);
void drm_vram_helper_release_mm(struct drm_device *dev);
int drmm_vram_helper_init(struct drm_device *dev, uint64_t vram_base,
size_t vram_size);
/*
* Mode-config helpers
*/
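
The new drmm_vram_helper_init() is the managed counterpart to
drm_vram_helper_alloc_mm()/drm_vram_helper_release_mm(): it ties the VRAM
manager's lifetime to the drm_device, so drivers no longer release it by
hand. A hedged sketch of how a PCI driver might adopt it (the header name
and BAR choice are assumptions, not taken from this hunk):

    #include <drm/drm_gem_vram_helper.h>

    static int example_load(struct drm_device *dev, struct pci_dev *pdev)
    {
            /* Use BAR 0 as VRAM; cleanup runs automatically through the
             * drm_device's managed-release machinery. */
            return drmm_vram_helper_init(dev,
                                         pci_resource_start(pdev, 0),
                                         pci_resource_len(pdev, 0));
    }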


@@ -27,12 +27,14 @@
#include <linux/seqlock.h>
#include <linux/idr.h>
#include <linux/poll.h>
#include <linux/kthread.h>
#include <drm/drm_file.h>
#include <drm/drm_modes.h>
struct drm_device;
struct drm_crtc;
struct drm_vblank_work;
/**
* struct drm_pending_vblank_event - pending vblank event tracking
@@ -203,6 +205,24 @@ struct drm_vblank_crtc {
* disabling functions multiple times.
*/
bool enabled;
/**
* @worker: The &kthread_worker used for executing vblank works.
*/
struct kthread_worker *worker;
/**
* @pending_work: A list of scheduled &drm_vblank_work items that are
* waiting for a future vblank.
*/
struct list_head pending_work;
/**
* @work_wait_queue: The wait queue used for signaling that a
* &drm_vblank_work item has either finished executing, or was
* cancelled.
*/
wait_queue_head_t work_wait_queue;
};
int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs);


@@ -0,0 +1,71 @@
/* SPDX-License-Identifier: MIT */
#ifndef _DRM_VBLANK_WORK_H_
#define _DRM_VBLANK_WORK_H_
#include <linux/kthread.h>
struct drm_crtc;
/**
* struct drm_vblank_work - A delayed work item which delays until a target
* vblank passes, and then executes at realtime priority outside of IRQ
* context.
*
* See also:
* drm_vblank_work_schedule()
* drm_vblank_work_init()
* drm_vblank_work_cancel_sync()
* drm_vblank_work_flush()
*/
struct drm_vblank_work {
/**
* @base: The base &kthread_work item which will be executed by
* &drm_vblank_crtc.worker. Drivers should not interact with this
* directly, and instead rely on drm_vblank_work_init() to initialize
* this.
*/
struct kthread_work base;
/**
* @vblank: A pointer to &drm_vblank_crtc this work item belongs to.
*/
struct drm_vblank_crtc *vblank;
/**
* @count: The target vblank this work will execute on. Drivers should
* not modify this value directly, and instead use
* drm_vblank_work_schedule()
*/
u64 count;
/**
* @cancelling: The number of drm_vblank_work_cancel_sync() calls that
* are currently running. A work item cannot be rescheduled until all
* calls have finished.
*/
int cancelling;
/**
* @node: The position of this work item in
* &drm_vblank_crtc.pending_work.
*/
struct list_head node;
};
/**
* to_drm_vblank_work - Retrieve the respective &drm_vblank_work item from a
* &kthread_work
* @_work: The &kthread_work embedded inside a &drm_vblank_work
*/
#define to_drm_vblank_work(_work) \
container_of((_work), struct drm_vblank_work, base)
int drm_vblank_work_schedule(struct drm_vblank_work *work,
u64 count, bool nextonmiss);
void drm_vblank_work_init(struct drm_vblank_work *work, struct drm_crtc *crtc,
void (*func)(struct kthread_work *work));
bool drm_vblank_work_cancel_sync(struct drm_vblank_work *work);
void drm_vblank_work_flush(struct drm_vblank_work *work);
#endif /* !_DRM_VBLANK_WORK_H_ */
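
A hedged usage sketch for this new API (the include paths and the +1 target
are assumptions; the function names and signatures are the ones declared
above):

    #include <drm/drm_vblank.h>
    #include <drm/drm_vblank_work.h>

    static void example_vblank_handler(struct kthread_work *base)
    {
            struct drm_vblank_work *work = to_drm_vblank_work(base);

            /* Runs at realtime priority, outside IRQ context, once the
             * scheduled vblank count has passed. */
            (void)work;
    }

    static void example_setup(struct drm_crtc *crtc,
                              struct drm_vblank_work *work)
    {
            drm_vblank_work_init(work, crtc, example_vblank_handler);

            /* Run one vblank from now; nextonmiss=true means a missed
             * target defers to the following vblank instead of executing
             * immediately. */
            drm_vblank_work_schedule(work,
                                     drm_crtc_vblank_count(crtc) + 1,
                                     true);
    }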


@@ -47,7 +47,6 @@
#define TTM_MEMTYPE_FLAG_FIXED (1 << 0) /* Fixed (on-card) PCI memory */
#define TTM_MEMTYPE_FLAG_MAPPABLE (1 << 1) /* Memory mappable */
#define TTM_MEMTYPE_FLAG_CMA (1 << 3) /* Can't map aperture */
struct ttm_mem_type_manager;
@@ -155,7 +154,6 @@ struct ttm_mem_type_manager_func {
* @use_io_reserve_lru: Use an lru list to try to unreserve io_mem_regions
* reserved by the TTM vm system.
* @io_reserve_lru: Optional lru list for unreserving io mem regions.
* @io_reserve_fastpath: Only use bdev::driver::io_mem_reserve to obtain
* @move_lock: lock for move fence
* static information. bdev::driver::io_mem_free is never used.
* @lru: The lru list for this memory type.
@@ -184,7 +182,6 @@ struct ttm_mem_type_manager {
void *priv;
struct mutex io_reserve_mutex;
bool use_io_reserve_lru;
bool io_reserve_fastpath;
spinlock_t move_lock;
/*
