Conversation

@lyakh (Collaborator) commented Dec 24, 2025

Clean up a couple of left-overs after the merge

Add a single info-level log entry when the userspace DP thread starts.

Signed-off-by: Guennadi Liakhovetski <[email protected]>
The module operations structure is located in the module's memory, so it
is accessible to the thread; there is no need to copy it.

Signed-off-by: Guennadi Liakhovetski <[email protected]>
On the one hand, IPCs are serialized, so a single IPC buffer shared by all
DP threads would be enough, but it would have to be a whole page in size to
be added to every DP thread's memory domain. On the other hand, we can
allocate such an IPC flattening buffer for each DP thread; then it doesn't
need to be mapped separately and doesn't need its own memory partition in
the thread's memory domain. A page is 4 KiB, while the buffer is probably
less than 100 bytes, so as long as we don't have more than 40 DP threads
we're better off using per-thread buffers, and we aren't likely to ever get
that many DP threads.

Signed-off-by: Guennadi Liakhovetski <[email protected]>
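
As an editorial illustration of the approach described above, here is a minimal sketch of the per-task buffer, assuming the task_dp_pdata, ipc4_flat and flat names mentioned in the review summary further down; the exact SOF definitions may differ.

struct ipc4_flat;			/* flattened IPC message, defined elsewhere in SOF */

struct task_dp_pdata {			/* abridged; only the field relevant here is shown */
	/* ...other per-task DP scheduler state... */
	struct ipc4_flat *flat;		/* per-task IPC flattening buffer (~100 bytes);
					 * allocated together with the task's own memory,
					 * so it is already inside the thread's memory
					 * domain and needs no separate mapping or
					 * dedicated memory partition
					 */
};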
The ppart partition pointer array isn't used any more; remove it.

Signed-off-by: Guennadi Liakhovetski <[email protected]>
Copilot AI left a comment

Pull request overview

This PR cleans up leftover code from a previous merge, primarily refactoring how IPC flattening buffers are managed by moving from a static global buffer to a per-task allocated buffer embedded in the task memory structure.

Key Changes

  • Replaced static global ipc_buf with per-task flat buffer allocated as part of task memory structure
  • Removed SOF_DP_PART_IPC memory partition that is no longer needed
  • Fixed spelling error in comment ("binded" → "bound")

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

  • src/schedule/zephyr_dp_schedule_application.c: removes the static IPC buffer and its associated partition, replacing them with a per-task buffer; adds a log statement for thread startup; fixes comment spelling; removes unused code (ops, the ppart array)
  • src/schedule/zephyr_dp_schedule.h: removes the SOF_DP_PART_IPC enum value; adds an ipc4_flat forward declaration and a flat pointer field to the task_dp_pdata struct


/* The IPC thread is waiting for the thread to be started, it can proceed now. */
k_sem_give(&dp_sync[task->core]);
comp_info(pmod->dev, "userspace thread started");
Copilot AI Dec 24, 2025

The log message "userspace thread started" is misleading because this function is used for both userspace and kernel threads depending on the CONFIG_USERSPACE configuration and the K_USER flag. Consider using a more generic message like "DP thread started" to accurately reflect that this thread may be either a userspace or kernel thread.

Suggested change
comp_info(pmod->dev, "userspace thread started");
comp_info(pmod->dev, "DP thread started");

@kv2019i (Collaborator)

@lyakh please confirm this is only run in userspace, given this is in dp_schedule_application?

@lyakh (Collaborator, Author)

@kv2019i correct, the "application" DP implementation always runs in user-space. I should remove all #if CONFIG_USERSPACE from it.


@lgirdwood (Member) left a comment

Just some open questions around the source_sink array handling.

int n_sources;
int n_sinks;
void *source_sink[];
void *source_sink[2 * CONFIG_MODULE_MAX_CONNECTIONS];
@lgirdwood (Member)

I assume we x2 for sink and source? If so, we should comment it.
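
As a purely editorial sketch of what that comment could say (the struct here is abridged and renamed, the exact layout is up to the real code, and CONFIG_MODULE_MAX_CONNECTIONS normally comes from Kconfig):

#ifndef CONFIG_MODULE_MAX_CONNECTIONS
#define CONFIG_MODULE_MAX_CONNECTIONS 8	/* placeholder only; the real value comes from Kconfig */
#endif

struct pipeline_state_sketch {		/* abridged, illustrative name */
	int n_sources;
	int n_sinks;
	/* Sources and sinks share this array, hence twice the
	 * per-direction limit of CONFIG_MODULE_MAX_CONNECTIONS.
	 */
	void *source_sink[2 * CONFIG_MODULE_MAX_CONNECTIONS];
};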

sizeof(void *) * (param->pipeline_state.n_sources +
param->pipeline_state.n_sinks) >
sizeof(ipc_buf))
if (param->pipeline_state.n_sources > CONFIG_MODULE_MAX_CONNECTIONS ||
@lgirdwood (Member)

>=? Since we are using an array above: void *source_sink[]

@lyakh (Collaborator, Author)

@lgirdwood sorry, not sure I understand? It's an array of pointers, why should == already be invalid?
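
To spell out why '>' is enough here, an editorial sketch (not code from the PR; the helper name and return values are illustrative):

#ifndef CONFIG_MODULE_MAX_CONNECTIONS
#define CONFIG_MODULE_MAX_CONNECTIONS 8	/* placeholder only; the real value comes from Kconfig */
#endif

/* Hypothetical check mirroring the quoted condition above. */
static int connection_counts_fit(int n_sources, int n_sinks)
{
	if (n_sources > CONFIG_MODULE_MAX_CONNECTIONS ||
	    n_sinks > CONFIG_MODULE_MAX_CONNECTIONS)
		return 0;	/* would not fit in source_sink[2 * CONFIG_MODULE_MAX_CONNECTIONS] */

	/* Worst case: both counts equal CONFIG_MODULE_MAX_CONNECTIONS, i.e.
	 * exactly 2 * CONFIG_MODULE_MAX_CONNECTIONS entries, which still fits.
	 * Counts equal to the limit are therefore valid, so '>' rather than
	 * '>=' is the right comparison.
	 */
	return 1;
}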

@lyakh (Collaborator, Author)

In fact I have a follow-up to this which converts the void *source_sink array into two separate arrays, sources and sinks. I've got a couple more clean-up changes; if I merged them all into this PR it would become more difficult to read. So maybe we can merge this and then I'll push the next one.
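
Purely as an illustration of that planned split (the actual follow-up change may look different):

#ifndef CONFIG_MODULE_MAX_CONNECTIONS
#define CONFIG_MODULE_MAX_CONNECTIONS 8	/* placeholder only; the real value comes from Kconfig */
#endif

struct pipeline_state_split_sketch {	/* illustrative name and layout */
	int n_sources;
	int n_sinks;
	void *sources[CONFIG_MODULE_MAX_CONNECTIONS];
	void *sinks[CONFIG_MODULE_MAX_CONNECTIONS];
};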

@lgirdwood (Member)

ok, that makes sense.

@lgirdwood lgirdwood merged commit c29d2dd into thesofproject:main Jan 5, 2026
60 of 67 checks passed
@lyakh lyakh deleted the app-cleanup branch January 6, 2026 06:42