@evnchn evnchn commented Nov 24, 2025

Motivation

The "speed-focused" evnchn is back with another performance-improving PR, introducing the concept of an "implicit handshake".

This cuts the handshake from 3 round-trips to 2, hence the roughly 33% faster time-to-first-interactivity.

Implementation

Recap before this PR

The handshake consists of 3 sequential steps, each requiring a round-trip:

  1. EngineIO handshake
  2. SocketIO handshake
  3. NiceGUI's handshake message
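
The three sequential steps above can be put into a back-of-the-envelope latency model (the RTT value and function names here are arbitrary assumptions, purely for illustration):

```python
# Back-of-the-envelope model: each sequential handshake step costs one
# network round-trip; the RTT value below is an arbitrary assumption.
RTT = 0.3  # seconds per round-trip

def time_to_interactive(round_trips: int, rtt: float = RTT) -> float:
    """Total handshake latency when every step waits for the previous one."""
    return round_trips * rtt

before = time_to_interactive(3)  # EngineIO + SocketIO + NiceGUI handshake
after = time_to_interactive(2)   # NiceGUI handshake folded into the SocketIO connect
print(f'saving: {1 - after / before:.0%}')  # → saving: 33%
```

Since the steps are strictly sequential, dropping one of three round-trips removes a third of the handshake latency regardless of the actual RTT.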

The revelation

If the SocketIO handshake already accepts a query payload, why not:

  • Shove whatever we want to communicate in the 3rd step into the 2nd step?
  • Instead of returning true or false explicitly, keep the SocketIO connection open on success, or close it down on failure?
  • Simply execute the code of the 3rd step's callback window.socket.emit("handshake", args, (ok) => {CODE}) in the 2nd step's callback?

This PR does exactly that.
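
The server-side decision can be sketched in plain Python (this is an illustration, not the actual NiceGUI/python-socketio code; the function name and the `client_id` field are hypothetical):

```python
from typing import Optional

# Illustrative sketch only: the handshake payload now rides along with the
# SocketIO connect request, and the server signals success by keeping the
# connection open (or failure by closing it) instead of emitting an explicit
# true/false reply.

def handle_connect(auth: Optional[dict]) -> bool:
    """Return True to keep the SocketIO connection open, False to reject it.

    `auth` stands in for the payload the client used to send in the separate
    'handshake' message; the field name `client_id` is hypothetical.
    """
    if not auth or 'client_id' not in auth:
        return False  # reject: a closed connection signals failure
    # ...look up the client, mark it as connected, etc. ...
    return True  # accept: an open connection signals success
```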

Cherry-on-tops

  • Function signature of _on_handshake kept: for backwards compatibility with NiceGUI testing code. The code is also more robust because it calls int explicitly.
  • Less code on the client side.
  • Copes with python-socketio passing a 3rd parameter we don't care about via _=None.
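
The `_=None` trick from the last bullet can be sketched as follows (the payload field and the handler body are made up for illustration, not taken from NiceGUI):

```python
# Hypothetical sketch: a trailing `_=None` parameter swallows the extra
# positional argument that python-socketio may or may not pass, so both
# call shapes below work with the same handler.
def on_handshake(sid: str, data: dict, _=None) -> bool:
    """Accept two or three positional arguments; ignore the third."""
    return int(data['document_id']) >= 0  # explicit int() for robustness

print(on_handshake('sid1', {'document_id': '7'}))          # → True
print(on_handshake('sid1', {'document_id': 7}, 'extra'))   # → True
```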

Progress

  • I chose a meaningful title that completes the sentence: "If applied, this PR will..."
  • The implementation is complete.
  • Pytests have been added (or are not necessary).
  • Documentation is not necessary (no functional differences, just faster).

@evnchn evnchn added the feature Type/scope: New feature or enhancement label Nov 24, 2025
evnchn commented Nov 24, 2025

import time

from nicegui import app, ui

app.config.socket_io_js_transports = ['polling'] # Chrome does not simulate latency in WebSocket


@ui.page('/')
async def page():
    time_begin = time.time()
    await ui.context.client.connected()
    time_end = time.time()
    ui.label(f'Connection established in {time_end - time_begin:.2f} seconds.')


ui.run(show=False, reload=False)

Chrome DevTools throttled to "Slow 4G", cache not disabled, no CPU throttling (the focus here is network latency):

Before PR: 1.93s
After PR: 1.31s
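
As a quick sanity check (pure arithmetic on the numbers above), the measured speedup is close to the predicted one third; the small gap is plausibly fixed overhead that doesn't scale with round-trips:

```python
# Sanity check: the measured speedup is close to the predicted 1/3.
before, after = 1.93, 1.31  # seconds, from the measurement above
print(f'{(before - after) / before:.0%} faster')  # → 32% faster
```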

evnchn commented Nov 24, 2025

Did we purge the client in test_prefetch_connects_after_navigation?

I have a suspicion that, because the handshake is now so efficient, the client is actually connected during prefetch.

I printed len(Client.instances) and it does go down.

evnchn commented Nov 24, 2025

bd986b6 gets test_prefetch_connects_after_navigation to pass by disabling this PR's new logic under prefetch mode.

While I would love to make the tests pass without the dual-mode behaviour, I can't manage to do it... @rodja If you can, take it away, or maybe we can leave this for another PR (prefetch and prerender are so hard to debug 😭)

@falkoschindler falkoschindler added the review Status: PR is open and needs review label Nov 24, 2025
@falkoschindler falkoschindler added this to the 3.5 milestone Nov 24, 2025