diff --git a/.gitignore b/.gitignore
index 517b9195a9..bc3f3e2809 100644
--- a/.gitignore
+++ b/.gitignore
@@ -47,3 +47,6 @@ libtailscale-sources.jar
.DS_Store
tailscale.version
+.opencode/
+.root/
+.envrc
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000000..2d273bc7c2
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,52 @@
+# AGENTS.md
+
+## Project Overview
+
+This repository contains the open source Tailscale Android client. Tailscale is a private WireGuard® network made easy. The Android client provides seamless VPN connectivity to Tailscale networks on Android devices.
+
+## Documentation Index
+
+The following Chinese documentation is available in the `docs/` directory:
+
+- [docs/01-项目指南.md](docs/01-项目指南.md) - Project overview, quick start, and usage instructions
+- [docs/02-开发指南.md](docs/02-开发指南.md) - Adding new features and development workflow
+- [docs/03-技术指南.md](docs/03-技术指南.md) - Architecture design, core components, and tech stack
+- [docs/04-更新日志.md](docs/04-更新日志.md) - Version updates and bug fixes
+
+## Common Commands
+
+- `make apk` - Build the debug APK
+- `make install` - Install the APK to a connected device
+- `make androidsdk` - Install necessary Android SDK components
+- `make docker-shell` - Start a Docker-based development shell
+- `make tag_release` - Bump Android version code, update version name, and tag commit
+
+## Architecture Highlights
+
+- Mixed Go and Android/Kotlin development
+- Go code compiled to JNI library for core Tailscale functionality
+- Standard Android project structure with Gradle build system
+- Support for multiple build environments: Android Studio, Docker, Nix
+
+## Documentation Maintenance Rules
+
+- **docs/ directory**: All documentation in the `docs/` directory must be written and maintained in Chinese.
+- **PROGRESS.md**: The `PROGRESS.md` file must be written and maintained in Chinese.
+- **AGENTS.md**: This file (AGENTS.md) must be written and maintained in English.
+
+### PROGRESS.md Rules
+
+`PROGRESS.md` is a sparse, append-only log for high-signal lessons learned, not a routine work log.
+
+- Only append entries after important bug fixes or significant changes.
+- Never record project initialization, scaffolding generation, documentation-only updates, formatting-only changes, routine configuration tweaks, or other low-signal work.
+- Each entry must be concise and include: problem, solution, prevention, and commitID.
+- The purpose is to help future AI agents and developers avoid repeating the same mistakes.
+
+## Development Workflow Rules
+
+- **Version Bump**: After modifying any code, the Android version code must be incremented by 1 in `android/build.gradle`.
+- **Build Verification**: After modifying any feature or implementation code, `make apk` must be run and complete successfully before the change can be considered successful.
+- **Device Validation**: After `make apk` succeeds for a code change, the updated APK must be installed onto a real Android device and the full end-to-end device test flow must pass before the change can be considered successful.
+- **Validation Executor**: `make apk`, APK installation, and real-device test execution must be delegated to the `execution_runner` subagent instead of the main agent to keep build and device-log noise out of the main context.
+- **Validation Model**: The `execution_runner` subagent used for build and device validation must use `gpt-5.4-mini` to minimize token usage.
diff --git a/PROGRESS.md b/PROGRESS.md
new file mode 100644
index 0000000000..311f31e75e
--- /dev/null
+++ b/PROGRESS.md
@@ -0,0 +1,32 @@
+# 经验教训记录
+
+> 每次遇到问题或完成重要改动后在此记录,必须附上 git commitID。
+> 仅记录重要 bug 修复或重大变更;不要记录初始化、脚手架生成、纯文档补全等噪音内容。
+
+---
+
+## 记录模板
+
+## [YYYY-MM-DD] 问题标题
+- **问题**: 描述问题现象、影响范围和触发原因
+- **解决**: 描述修复方式和关键改动
+- **避免**: 描述以后如何避免再次出现
+- **commitID**: `待填写:实际 commit hash`
+
+## [2026-04-05] Android 侧 SOCKS MVP 自动化验证闭环
+- **问题**: 初版 Android 侧 SOCKS5 MVP 虽然已能通过 adb 触发测试,但存在测试入口暴露面过大、多个场景共用同一 WorkManager unique work 导致结果互相覆盖、以及脚本无法通过退出码做机器判定的问题,影响自动化联调的稳定性与安全边界。
+- **解决**: 新增 `AdbTcpHttpTestContract` 与 `AdbTcpHttpTestWorker`,通过 `IPNReceiver` 提供 debug-only 的 `RUN_NETWORK_TEST` 入口,按 `requestId` 隔离 unique work,限制 `timeoutMs <= 10_000`,补齐 `tsocks-test-build/install/trigger/logs/pass-fail/run-all.sh` 脚本链路,追加中文开发说明,并完成 `DIRECT`、`TAILSCALE_NORMAL`、`TAILNET_SOCKS` 三类路径的真机 adb 验证。
+- **避免**: 后续新增 adb/debug harness 时,应同步设计入口收口、并发隔离、稳定日志字段与非 0 退出码,先把“可自动判定”和“不会误暴露到 release”作为基础约束,而不是事后补救。
+- **commitID**: `fe770e031305534946c1ebc1f7516db66b5dadbc`
+
+## [2026-04-07] phase-3.1a 最小规则化 TUN 内 TCP 分流原型
+- **问题**: phase-3 的真实数据面虽然已经能接管单个 `104.18.26.120:80` 出站 TCP flow,但规则匹配、`/32` route 注入和 gVisor proof-stack 拦截分别散落在 `tsocks.go`、`net.go`、`step0_tun.go`,导致“逻辑 allowlist”与“真实接管目标”割裂,无法稳定扩到多公网目标,也容易把 baseline 环境未就绪误判成回归。
+- **解决**: 新增集中式 `tsocks_rules.go`,用最小 `IP:port` / `IP:*` 规则表统一驱动 route 选择、`TAILNET_SOCKS` 的 `/32` 注入和 step0 多目标拦截;补充 `hostHeader` 与 `previewOnly` 调试字段,扩展 `phase3-public-http-a/b`、`phase3-public-no-match`、`phase3-wrong-port-entered-tun`、`phase3-recursion-guard` 场景,并让日志稳定输出 `matchedRule`、`selectedRoute`、`injectedRoute`、`offloadDecision`、`recursionGuard` 等机判字段;同时在 `run-all` 中为 phase-1 baseline 增加就绪探测,避免把联调服务未准备好误报成代码失败。
+- **避免**: 后续继续演进 tun 边界实验时,必须始终保持“规则源唯一、route 注入派生、数据面日志可机判”这三件事同步推进;同时要把 `/32` 注入只能精确到 IP 的语义边界写清楚,不要把 phase-3.1a 描述成真正的系统级 `IP:port` 透明分流。
+- **commitID**: `待填写:实际 commit hash`
+
+## [2026-04-12] phase-3.2 数据面可验证、可压测、可诊断工程原型
+- **问题**: phase-3.1a 虽然功能可用,但缺少稳定 `flow_id`、并发压测、TCP 生命周期观测、资源回收观测和可重复 baseline 测试服务,导致“能跑通”与“能验证/能诊断”之间仍有明显断层。
+- **解决**: 为 datapath 引入稳定 `flow_id`、`terminator_attach`/`socks_connect`/`relay_start`/`relay_end`/`conn_close` 等统一日志事件,补齐 `SYN/SYN-ACK/ACK/FIN/RST` 生命周期观测与 `activeRelays`/`goroutines`/`openFDs` 资源快照;新增动态 baseline 环境解析、host 侧 HTTP/TCP 测试服务、自启动与健康检查脚本,并补充 `phase32` 并发/错端口/lifecycle 验证脚本,完成真机 `PHASE32_PASS` 验证。
+- **避免**: 后续继续演进 datapath 时,任何“规则或 relay 行为改动”都必须同步维护三件事:稳定 flow 关联字段、可重复 baseline 环境、以及并发与 lifecycle 的自动机判脚本;不要再依赖单流人工观察来判断稳定性。
+- **commitID**: `待填写:实际 commit hash`
diff --git a/android/build.gradle b/android/build.gradle
index 8f64b3de8e..9c7c0da3ff 100644
--- a/android/build.gradle
+++ b/android/build.gradle
@@ -37,7 +37,7 @@ android {
defaultConfig {
minSdkVersion 26
targetSdkVersion 35
- versionCode 468
+ versionCode 503
versionName getVersionProperty("VERSION_LONG")
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
diff --git a/android/src/main/AndroidManifest.xml b/android/src/main/AndroidManifest.xml
index 92cb0dea41..e9c83f08b8 100644
--- a/android/src/main/AndroidManifest.xml
+++ b/android/src/main/AndroidManifest.xml
@@ -90,6 +90,13 @@
+
+
diff --git a/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestContract.kt b/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestContract.kt
new file mode 100644
index 0000000000..fc435227ae
--- /dev/null
+++ b/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestContract.kt
@@ -0,0 +1,39 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+package com.tailscale.ipn
+
+object AdbTcpHttpTestContract {
+ const val ACTION_RUN_TEST = "com.tailscale.ipn.RUN_NETWORK_TEST"
+ const val WORK_RUN_TEST = "ipn-run-network-test"
+
+ const val EXTRA_SCENARIO = "scenario"
+ const val EXTRA_REQUEST_ID = "requestId"
+ const val EXTRA_HOST = "host"
+ const val EXTRA_PORT = "port"
+ const val EXTRA_PROTOCOL = "protocol"
+ const val EXTRA_PATH = "path"
+ const val EXTRA_PAYLOAD = "payload"
+ const val EXTRA_HOST_HEADER = "hostHeader"
+ const val EXTRA_TIMEOUT_MS = "timeoutMs"
+ const val EXTRA_SOCKS_ENABLED = "socksEnabled"
+ const val EXTRA_PREVIEW_ONLY = "previewOnly"
+ const val EXTRA_URL = "url"
+
+ const val TAG_TEST = "TSOCKS_TEST"
+ const val TAG_ROUTE = "TSOCKS_ROUTE"
+ const val TAG_SOCKS = "TSOCKS_SOCKS"
+ const val TAG_DATAPATH = "TSOCKS_DATAPATH"
+
+ const val DEFAULT_PROTOCOL = "tcp"
+ const val DEFAULT_PATH = "/"
+ const val DEFAULT_TIMEOUT_MS = 5_000L
+ const val DEFAULT_SOCKS_ENABLED = true
+
+ const val LAN_HOST = "192.168.31.101"
+ const val TAILNET_LAB_HOST = "100.109.193.113"
+ const val TAILNET_DOMAIN_HOST = "wide-ts-wu"
+ const val SOCKS_SERVER_HOST = "100.78.63.77"
+ const val SOCKS_SERVER_PORT = 1080
+ const val PUBLIC_ALLOWLIST_HOST = "example.com"
+ const val PUBLIC_ALLOWLIST_PORT = 80
+}
diff --git a/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestWorker.kt b/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestWorker.kt
new file mode 100644
index 0000000000..f8ccd48a2a
--- /dev/null
+++ b/android/src/main/java/com/tailscale/ipn/AdbTcpHttpTestWorker.kt
@@ -0,0 +1,95 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+package com.tailscale.ipn
+
+import android.content.Context
+import androidx.work.CoroutineWorker
+import androidx.work.Data
+import androidx.work.WorkerParameters
+import com.tailscale.ipn.util.TSLog
+import kotlinx.serialization.SerialName
+import kotlinx.serialization.Serializable
+import kotlinx.serialization.encodeToString
+import kotlinx.serialization.json.Json
+import org.json.JSONObject
+
+class AdbTcpHttpTestWorker(appContext: Context, workerParams: WorkerParameters) :
+ CoroutineWorker(appContext, workerParams) {
+
+ override suspend fun doWork(): Result {
+ val request = ProbeRequest.from(inputData)
+
+ return runCatching {
+ val response = App.get().getLibtailscaleApp().runTsocksProbe(Json.encodeToString(request))
+ val json = JSONObject(response)
+ Result.success(
+ Data.Builder()
+ .putString("route", json.optString("route", "UNKNOWN"))
+ .putString("matchedRule", json.optString("matchedRule", "unknown"))
+ .putInt("bytesSent", json.optInt("bytesSent", 0))
+ .putInt("bytesReceived", json.optInt("bytesReceived", 0))
+ .putString("detail", json.optString("detail", ""))
+ .build())
+ }
+ .getOrElse { error ->
+ TSLog.e(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=TEST_FAIL requestId=${request.requestId} scenario=${request.scenario} route=UNKNOWN reason=${sanitize(error.message ?: error.javaClass.simpleName)}")
+ Result.failure(
+ Data.Builder().putString("reason", error.message ?: error.javaClass.simpleName).build())
+ }
+ }
+
+ @Serializable
+ private data class ProbeRequest(
+ @SerialName("scenario") val scenario: String,
+ @SerialName("requestId") val requestId: String,
+ @SerialName("host") val host: String,
+ @SerialName("port") val port: Int,
+ @SerialName("protocol") val protocol: String,
+ @SerialName("path") val path: String,
+ @SerialName("payload") val payload: String,
+ @SerialName("hostHeader") val hostHeader: String,
+ @SerialName("timeoutMs") val timeoutMs: Int,
+ @SerialName("socksEnabled") val socksEnabled: Boolean,
+ @SerialName("previewOnly") val previewOnly: Boolean,
+ ) {
+ companion object {
+ fun from(data: Data): ProbeRequest {
+ return ProbeRequest(
+ scenario = data.getString(AdbTcpHttpTestContract.EXTRA_SCENARIO)?.trim().orEmpty().ifEmpty { "unspecified" },
+ requestId =
+ data.getString(AdbTcpHttpTestContract.EXTRA_REQUEST_ID)?.trim().orEmpty().ifEmpty {
+ "req-${System.currentTimeMillis()}"
+ },
+ host = data.getString(AdbTcpHttpTestContract.EXTRA_HOST)?.trim().orEmpty(),
+ port = data.getInt(AdbTcpHttpTestContract.EXTRA_PORT, -1),
+ protocol =
+ data.getString(AdbTcpHttpTestContract.EXTRA_PROTOCOL)
+ ?.trim()
+ ?.lowercase()
+ .orEmpty()
+ .ifEmpty { AdbTcpHttpTestContract.DEFAULT_PROTOCOL },
+ path = data.getString(AdbTcpHttpTestContract.EXTRA_PATH)?.trim().orEmpty(),
+ payload = data.getString(AdbTcpHttpTestContract.EXTRA_PAYLOAD).orEmpty(),
+ hostHeader = data.getString(AdbTcpHttpTestContract.EXTRA_HOST_HEADER)?.trim().orEmpty(),
+ timeoutMs =
+ data.getLong(
+ AdbTcpHttpTestContract.EXTRA_TIMEOUT_MS,
+ AdbTcpHttpTestContract.DEFAULT_TIMEOUT_MS)
+ .coerceIn(1L, 10_000L)
+ .toInt(),
+ socksEnabled =
+ data.getBoolean(
+ AdbTcpHttpTestContract.EXTRA_SOCKS_ENABLED,
+ AdbTcpHttpTestContract.DEFAULT_SOCKS_ENABLED),
+ previewOnly = data.getBoolean(AdbTcpHttpTestContract.EXTRA_PREVIEW_ONLY, false),
+ )
+ }
+ }
+ }
+
+ private fun sanitize(value: String): String {
+ return value.replace(Regex("\\s+"), "_").replace(Regex("[^a-zA-Z0-9_./:=-]"), "-")
+ }
+}
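The worker's `sanitize` keeps failure reasons grep-safe for the `key=value` log grammar (whitespace collapsed to `_`, anything outside the safe set replaced with `-`). A Go-side equivalent, shown here only as an illustrative sketch and not as code from this diff, would be:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	wsRe     = regexp.MustCompile(`\s+`)                // runs of whitespace
	unsafeRe = regexp.MustCompile(`[^a-zA-Z0-9_./:=-]`) // anything grep-unsafe
)

// sanitizeLogField mirrors the Kotlin sanitize(): collapse whitespace to "_"
// first, then replace remaining unsafe characters with "-", so the value can
// never break the key=value log line grammar.
func sanitizeLogField(v string) string {
	return unsafeRe.ReplaceAllString(wsRe.ReplaceAllString(v, "_"), "-")
}

func main() {
	fmt.Println(sanitizeLogField("connect timed out: 10.0.0.1"))
}
```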
diff --git a/android/src/main/java/com/tailscale/ipn/DatapathTestActivity.kt b/android/src/main/java/com/tailscale/ipn/DatapathTestActivity.kt
new file mode 100644
index 0000000000..613106e28c
--- /dev/null
+++ b/android/src/main/java/com/tailscale/ipn/DatapathTestActivity.kt
@@ -0,0 +1,114 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+package com.tailscale.ipn
+
+import android.app.Activity
+import android.os.Bundle
+import com.tailscale.ipn.util.TSLog
+import java.net.Socket
+import java.net.URI
+import java.nio.charset.StandardCharsets
+import kotlinx.coroutines.CoroutineScope
+import kotlinx.coroutines.Dispatchers
+import kotlinx.coroutines.SupervisorJob
+import kotlinx.coroutines.cancel
+import kotlinx.coroutines.launch
+import kotlinx.coroutines.withContext
+
+class DatapathTestActivity : Activity() {
+ private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Main)
+
+ override fun onCreate(savedInstanceState: Bundle?) {
+ super.onCreate(savedInstanceState)
+ if (!BuildConfig.DEBUG) {
+ finish()
+ return
+ }
+
+ val scenario = intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_SCENARIO)?.trim().orEmpty()
+ val requestId =
+ intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_REQUEST_ID)?.trim().orEmpty().ifEmpty {
+ "req-${System.currentTimeMillis()}"
+ }
+ val url = intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_URL)?.trim().orEmpty()
+ val timeoutMs =
+ intent.getLongExtra(
+ AdbTcpHttpTestContract.EXTRA_TIMEOUT_MS, AdbTcpHttpTestContract.DEFAULT_TIMEOUT_MS)
+ .coerceIn(1L, 10_000L)
+ .toInt()
+
+ scope.launch {
+ if (url.isEmpty()) {
+ TSLog.e(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=TEST_FAIL requestId=$requestId scenario=$scenario route=DATAPATH reason=missing_url")
+ finish()
+ return@launch
+ }
+
+ TSLog.d(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=request_start requestId=$requestId scenario=$scenario protocol=http url=${sanitize(url)} flow=datapath-client")
+
+ val result =
+ withContext(Dispatchers.IO) {
+ runCatching {
+ val uri = URI(url)
+ val host = uri.host ?: throw IllegalArgumentException("missing_host")
+ val port = if (uri.port == -1) 80 else uri.port
+ val path = if (uri.rawPath.isNullOrBlank()) "/" else uri.rawPath
+ val socket = Socket()
+ socket.connect(java.net.InetSocketAddress(host, port), timeoutMs)
+ socket.soTimeout = timeoutMs
+ val request =
+ buildString {
+ append("GET ")
+ append(path)
+ append(" HTTP/1.1\r\n")
+ append("Host: ")
+ append(host)
+ append("\r\nConnection: close\r\n")
+ append("User-Agent: tailscale-android-tsocks-datapath-test\r\n\r\n")
+ }
+ .toByteArray(StandardCharsets.UTF_8)
+ socket.getOutputStream().write(request)
+ socket.getOutputStream().flush()
+ val response = socket.getInputStream().readBytes()
+ socket.close()
+ val statusLine = response.toString(StandardCharsets.UTF_8).lineSequence().firstOrNull()?.trim().orEmpty()
+ val status = statusLine.split(' ').getOrNull(1)?.toIntOrNull() ?: 0
+ val bodyBytes = response
+ Triple(status in 200..399, status, bodyBytes.size)
+ }
+ }
+
+ result.fold(
+ onSuccess = { (success, status, bodySize) ->
+ if (success) {
+ TSLog.d(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=TEST_PASS requestId=$requestId scenario=$scenario route=DATAPATH protocol=http bytesSent=0 bytesReceived=$bodySize detail=http_status_$status")
+ } else {
+ TSLog.e(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=TEST_FAIL requestId=$requestId scenario=$scenario route=DATAPATH reason=http_status_$status")
+ }
+ },
+ onFailure = { error ->
+ TSLog.e(
+ AdbTcpHttpTestContract.TAG_TEST,
+ "event=TEST_FAIL requestId=$requestId scenario=$scenario route=DATAPATH reason=${sanitize(error.message ?: error.javaClass.simpleName)}")
+ })
+ finish()
+ }
+ }
+
+ override fun onDestroy() {
+ super.onDestroy()
+ scope.cancel()
+ }
+
+ private fun sanitize(value: String): String {
+ return value.replace(Regex("\\s+"), "_").replace(Regex("[^a-zA-Z0-9_./:=-]"), "-")
+ }
+}
diff --git a/android/src/main/java/com/tailscale/ipn/IPNReceiver.java b/android/src/main/java/com/tailscale/ipn/IPNReceiver.java
index 87ab33c023..86762ad665 100644
--- a/android/src/main/java/com/tailscale/ipn/IPNReceiver.java
+++ b/android/src/main/java/com/tailscale/ipn/IPNReceiver.java
@@ -33,6 +33,10 @@ public class IPNReceiver extends BroadcastReceiver {
public void onReceive(Context context, Intent intent) {
if (intent == null) return;
+ if (Objects.equals(intent.getAction(), AdbTcpHttpTestContract.ACTION_RUN_TEST) && !BuildConfig.DEBUG) {
+ return;
+ }
+
final WorkManager workManager = WorkManager.getInstance(context);
final String action = intent.getAction();
@@ -72,6 +76,38 @@ public void onReceive(Context context, Intent intent) {
.build();
workManager.enqueueUniqueWork(WORK_USE_EXIT_NODE, ExistingWorkPolicy.REPLACE, req);
+ } else if (Objects.equals(action, AdbTcpHttpTestContract.ACTION_RUN_TEST)) {
+ String requestId = intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_REQUEST_ID);
+ if (requestId == null || requestId.trim().isEmpty()) {
+ requestId = String.valueOf(System.currentTimeMillis());
+ }
+ Data input =
+ new Data.Builder()
+ .putString(AdbTcpHttpTestContract.EXTRA_SCENARIO, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_SCENARIO))
+ .putString(AdbTcpHttpTestContract.EXTRA_REQUEST_ID, requestId)
+ .putString(AdbTcpHttpTestContract.EXTRA_HOST, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_HOST))
+ .putInt(AdbTcpHttpTestContract.EXTRA_PORT, intent.getIntExtra(AdbTcpHttpTestContract.EXTRA_PORT, -1))
+ .putString(AdbTcpHttpTestContract.EXTRA_PROTOCOL, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_PROTOCOL))
+ .putString(AdbTcpHttpTestContract.EXTRA_PATH, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_PATH))
+ .putString(AdbTcpHttpTestContract.EXTRA_PAYLOAD, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_PAYLOAD))
+ .putString(AdbTcpHttpTestContract.EXTRA_HOST_HEADER, intent.getStringExtra(AdbTcpHttpTestContract.EXTRA_HOST_HEADER))
+ .putLong(AdbTcpHttpTestContract.EXTRA_TIMEOUT_MS, intent.getLongExtra(AdbTcpHttpTestContract.EXTRA_TIMEOUT_MS, AdbTcpHttpTestContract.DEFAULT_TIMEOUT_MS))
+ .putBoolean(AdbTcpHttpTestContract.EXTRA_SOCKS_ENABLED, intent.getBooleanExtra(AdbTcpHttpTestContract.EXTRA_SOCKS_ENABLED, AdbTcpHttpTestContract.DEFAULT_SOCKS_ENABLED))
+ .putBoolean(AdbTcpHttpTestContract.EXTRA_PREVIEW_ONLY, intent.getBooleanExtra(AdbTcpHttpTestContract.EXTRA_PREVIEW_ONLY, false))
+ .build();
+
+ OneTimeWorkRequest req =
+ new OneTimeWorkRequest.Builder(AdbTcpHttpTestWorker.class)
+ .setInputData(input)
+ .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
+ .addTag(AdbTcpHttpTestContract.WORK_RUN_TEST)
+ .addTag(requestId)
+ .build();
+
+ workManager.enqueueUniqueWork(
+ AdbTcpHttpTestContract.WORK_RUN_TEST + "-" + requestId,
+ ExistingWorkPolicy.REPLACE,
+ req);
}
}
}
diff --git "a/docs/01-\351\241\271\347\233\256\346\214\207\345\215\227.md" "b/docs/01-\351\241\271\347\233\256\346\214\207\345\215\227.md"
new file mode 100644
index 0000000000..9a36d987ce
--- /dev/null
+++ "b/docs/01-\351\241\271\347\233\256\346\214\207\345\215\227.md"
@@ -0,0 +1,55 @@
+# 01-项目指南
+
+## 项目概述
+
+本仓库包含 Tailscale Android 客户端的开源代码。Tailscale 让私有 WireGuard® 网络变得简单易用。Android 客户端为 Android 设备提供了到 Tailscale 网络的无缝 VPN 连接。
+
+## 快速开始
+
+### 环境准备
+
+构建 Tailscale Android 客户端需要以下工具:
+
+- Go 运行时环境
+- Android SDK
+- Android SDK 组件(运行 `make androidsdk` 可安装)
+
+### 使用 Android Studio 开发
+
+1. 安装 Go 运行时环境(https://go.dev/dl/)
+2. 安装 Android Studio(https://developer.android.com/studio)
+3. 启动 Android Studio,从欢迎界面选择 "More Actions" 和 "SDK Manager"
+4. 在 SDK 管理器中,选择 "SDK Tools" 标签并安装 "Android SDK Command-line Tools (latest)"
+5. 运行 `make androidsdk` 安装必要的 SDK 组件
+
+### 使用 Docker 开发
+
+如果希望避免在主机系统上安装软件,可以使用基于 Docker 的开发环境:
+
+```sh
+make docker-shell
+```
+
+### 使用 Nix 开发
+
+如果已安装 Nix 2.4 或更高版本,可以使用 Nix 开发环境:
+
+```sh
+alias nix='nix --extra-experimental-features "nix-command flakes"'
+nix develop
+```
+
+## 构建与安装
+
+```sh
+make apk
+make install
+```
+
+## 使用说明
+
+应用可以从以下平台获取:
+
+- Google Play Store
+- Amazon Appstore(适用于 Amazon Fire 平板和 Fire TV 设备)
+- F-Droid(独立构建版本)
diff --git "a/docs/02-\345\274\200\345\217\221\346\214\207\345\215\227.md" "b/docs/02-\345\274\200\345\217\221\346\214\207\345\215\227.md"
new file mode 100644
index 0000000000..67aac26ed1
--- /dev/null
+++ "b/docs/02-\345\274\200\345\217\221\346\214\207\345\215\227.md"
@@ -0,0 +1,184 @@
+# 02-开发指南
+
+## 添加新功能
+
+### 代码结构
+
+项目采用混合 Go 和 Android/Kotlin 开发架构:
+
+- Go 代码编译为 JNI 库,提供核心 Tailscale 功能
+- Android/Kotlin 代码处理 UI 和平台集成
+
+### 开发工作流程
+
+1. 确保开发环境已正确设置
+2. 创建功能分支进行开发
+3. 实现代码变更
+4. 运行构建测试:`make apk`
+5. 在设备上测试:`make install`
+6. 提交变更(需要 Signed-off-by 行)
+
+### 代码格式化
+
+- Java、Kotlin 和 XML 文件:使用 Android Studio 中的 ktfmt 插件,默认设置并启用 "保存时格式化"
+- Go 代码:遵循标准 Go 格式化规范
+
+## 发布构建
+
+使用 `make tag_release` 递增 Android versionCode、更新版本名称,并为当前提交打 tag。
+
+## Fire Stick TV 开发
+
+在 Fire Stick 上:
+
+* 设置 > 我的 Fire TV > 开发者选项 > ADB 调试 > 开启
+
+一些有用的命令:
+```
+adb connect 10.2.200.213:5555
+adb install -r tailscale-fdroid.apk
+adb shell am start -n com.tailscale.ipn/com.tailscale.ipn.MainActivity
+adb shell pm uninstall com.tailscale.ipn
+```
+
+## Android 侧 TCP/HTTP MVP 测试
+
+本仓库提供一个仅用于 adb 触发的 Android 侧 MVP 测试通道,复用 `IPNReceiver` + `WorkManager`,不修改 UI,也不改全局 VPN/TUN 路由行为。
+
+当前阶段正式定义为:`phase-3.2 = 数据面可验证、可压测、可诊断的工程原型`。
+
+### 构建与安装
+
+```sh
+sh scripts/tsocks-test-build.sh
+sh scripts/tsocks-test-install.sh
+```
+
+### baseline 测试服务
+
+```sh
+sh scripts/tsocks-test-services-start.sh
+sh scripts/tsocks-test-services-health.sh
+sh scripts/tsocks-test-services-stop.sh
+```
+
+phase-3.2 默认通过 `scripts/tsocks-test-env.sh` 自动解析当前机器的 LAN IPv4 与 tailnet IPv4,并把 baseline 场景指向当前机器本地测试服务。
+
+如需覆写,可设置:
+
+```sh
+TSOCKS_TEST_LAN_HOST=
+TSOCKS_TEST_TAILNET_HOST=
+```
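上述自动解析可以用一个假设性的 Go 草图说明:tailnet 地址来自 Tailscale 分配节点地址所用的 CGNAT 段 `100.64.0.0/10`,LAN 地址则是普通私网地址。`classifyIPv4` 仅为示意函数,并非 `tsocks-test-env.sh` 的真实实现:

```go
package main

import (
	"fmt"
	"net/netip"
)

// tailnetCGNAT 是 Tailscale 分配节点地址的 100.64.0.0/10 段。
var tailnetCGNAT = netip.MustParsePrefix("100.64.0.0/10")

// classifyIPv4 示意 baseline 地址解析的判定:CGNAT 段视为 tailnet
// baseline host,私网地址视为 LAN baseline host,其余归为 other。
func classifyIPv4(s string) string {
	addr, err := netip.ParseAddr(s)
	if err != nil || !addr.Is4() {
		return "invalid"
	}
	switch {
	case tailnetCGNAT.Contains(addr):
		return "tailnet"
	case addr.IsPrivate():
		return "lan"
	default:
		return "other"
	}
}

func main() {
	fmt.Println(classifyIPv4("100.109.193.113"))
	fmt.Println(classifyIPv4("192.168.31.101"))
}
```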
+
+### 触发单项测试
+
+```sh
+sh scripts/tsocks-test-trigger.sh lan-http
+sh scripts/tsocks-test-trigger.sh tailnet-http
+sh scripts/tsocks-test-trigger.sh lan-tcp
+sh scripts/tsocks-test-trigger.sh tailnet-tcp
+sh scripts/tsocks-test-trigger.sh tailnet-tcp-close
+sh scripts/tsocks-test-trigger.sh tailnet-tcp-rst
+sh scripts/tsocks-test-trigger.sh public-http
+sh scripts/tsocks-test-trigger.sh phase3-public-http-a
+sh scripts/tsocks-test-trigger.sh phase3-public-http-b
+sh scripts/tsocks-test-trigger.sh phase3-public-no-match
+sh scripts/tsocks-test-trigger.sh phase3-wrong-port-entered-tun
+sh scripts/tsocks-test-trigger.sh phase3-recursion-guard
+```
+
+其中:
+
+- `lan-http` / `lan-tcp` 走 `DIRECT`
+- `tailnet-http` / `tailnet-tcp` 走 `TAILSCALE_NORMAL`
+- `public-http` 仅在精确匹配 `example.com:80` 时走 `TAILNET_SOCKS`,通过固定 SOCKS5 服务器 `100.78.63.77:1080`
+- `phase3-public-http-a` / `phase3-public-http-b` 会分别对 `104.18.26.120:80`、`104.18.27.120:80` 对应的目标 IP 自动注入 `/32` VPN 路由,并在流量进入 `tun0` 后由 `step0_tun` 按 `IP:port` 规则判断;命中后由 gVisor terminator 接管并通过 tailnet SOCKS5 转发;请求仍使用 `Host: example.com`
+- `phase3-public-no-match` 会对 `104.18.4.106:80` 发起 HTTP 请求并显式带 `Host: example.net`,用于验证未命中白名单时保持 `DIRECT`,且不会误走 SOCKS
+- `phase3-wrong-port-entered-tun` 会触发 `104.18.26.120:81`,用于复现“规则未命中但因 `/32` 已注入仍进入 `tun0`”的 phase-3.2 语义边界;这是预期行为,不是 bug
+- `phase3-recursion-guard` 会对 `100.78.63.77:1080` 做 preview-only 路由校验,确保 SOCKS 服务器自身始终走 `DIRECT`,避免递归代理
+- `tailnet-tcp-close` 会向 tailnet baseline TCP 服务发送 `CLOSE`,用于验证服务端主动关闭
+- `tailnet-tcp-rst` 会向 tailnet baseline TCP 服务发送 `RST`,用于验证异常关闭路径
+- `RUN_NETWORK_TEST` 仅在 `BuildConfig.DEBUG=true` 的构建中生效,用于收敛测试入口暴露面
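上面的命中语义可以用一个极简规则表草图说明(假设性示意,与 `tsocks_rules.go` 的真实实现可能不同):精确 `IP:port` 优先于 `IP:*`,SOCKS 服务器自身通过 `IP:*` 规则固定为 `DIRECT` 以防递归代理:

```go
package main

import "fmt"

// ruleTable 是最小 "IP:port" / "IP:*" 规则模型的示意,不是真实规则源。
var ruleTable = map[string]string{
	"104.18.26.120:80": "TAILNET_SOCKS",
	"104.18.27.120:80": "TAILNET_SOCKS",
	"100.78.63.77:*":   "DIRECT", // recursion guard:SOCKS 服务器自身始终 DIRECT
}

// selectRoute 先查精确 IP:port,再查 IP:* 通配,未命中默认 DIRECT。
func selectRoute(ip string, port int) string {
	if r, ok := ruleTable[fmt.Sprintf("%s:%d", ip, port)]; ok {
		return r
	}
	if r, ok := ruleTable[ip+":*"]; ok {
		return r
	}
	return "DIRECT"
}

func main() {
	fmt.Println(selectRoute("104.18.26.120", 80))
	fmt.Println(selectRoute("104.18.26.120", 81))
	fmt.Println(selectRoute("100.78.63.77", 1080))
}
```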
+
+### 查看日志
+
+```sh
+sh scripts/tsocks-test-logs.sh
+sh scripts/tsocks-test-pass-fail.sh
+```
+
+关键日志标签:
+
+- `TSOCKS_TEST`:请求开始、目标连接、收发结果、最终 `TEST_PASS` / `TEST_FAIL`
+- `TSOCKS_ROUTE`:匹配规则与最终路由选择
+- `TSOCKS_SOCKS`:SOCKS5 服务器连接与 CONNECT 握手结果
+- `TSOCKS_DATAPATH`:真实数据面 flow 识别、SYN/SYN-ACK/ACK、FIN/RST、terminator attach、relay 生命周期、字节统计与连接关闭
+
+日志采用 `key=value` 风格,便于 `grep 'event=TEST_PASS'` 或 `grep 'event=TEST_FAIL'` 做自动汇总。
+
+其中与 phase-3.2 机判直接相关的字段至少包括:`flow_id`、`dst`、`matchedRule`、`selectedRoute`、`injectedRoute`、`entered_tun_due_to_/32`、`offloadDecision`、`offloadReason`、`recursionGuard`、`activeRelays`、`goroutines`、`openFDs`。
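这类 `key=value` 日志可以直接按空白切分做机判。下面是一个假设性的 Go 解析草图(真实脚本链路使用 grep/awk,此处仅演示字段提取思路):

```go
package main

import (
	"fmt"
	"strings"
)

// parseKV 把一行 key=value 风格日志解析成 map,便于对 flow_id、
// selectedRoute 等机判字段做断言,而不依赖脆弱的正则。
func parseKV(line string) map[string]string {
	out := map[string]string{}
	for _, tok := range strings.Fields(line) {
		if k, v, ok := strings.Cut(tok, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	m := parseKV("event=TEST_PASS requestId=req-1 selectedRoute=TAILNET_SOCKS bytesReceived=1256")
	fmt.Println(m["event"], m["selectedRoute"])
}
```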
+
+### 一键运行并汇总
+
+```sh
+sh scripts/tsocks-test-run-all.sh
+BUILD_FIRST=false INSTALL_FIRST=false sh scripts/tsocks-test-phase32.sh
+```
+
+脚本默认会自动执行 build、install、`CONNECT_VPN`、逐项触发测试、拉取日志并输出 PASS/FAIL 汇总;若任一场景失败,脚本会返回非 0 退出码。
+
+`run-all` 在正式触发前会先对 phase-1 baseline 的 4 个外部依赖端点做就绪探测;如果 LAN/tailnet 测试服务未准备好,脚本会直接输出 `ENV_NOT_READY ` 并提前退出,避免把联调环境问题误判成代码回归。
+
+`phase32` 会在 `CONNECT_VPN` 后先等待 `lan-http`、`tailnet-http`、`lan-tcp`、`tailnet-tcp` ready,再依次执行 baseline clean check、并发压测、`/32` 错端口边界验证与生命周期验证。
+
+当前 `run-all` 默认覆盖:
+
+- phase-1 基线:`lan-http`、`tailnet-http`、`lan-tcp`、`tailnet-tcp`、`public-http`
+- phase-3 positive:`phase3-public-http-a`、`phase3-public-http-b`
+- phase-3 negative:`phase3-public-no-match`、`phase3-wrong-port-entered-tun`、`phase3-recursion-guard`
+
+如需跳过某一步,可用环境变量:
+
+```sh
+BUILD_FIRST=false INSTALL_FIRST=false CONNECT_VPN_FIRST=false sh scripts/tsocks-test-run-all.sh
+```
+
+如需临时关闭 SOCKS 路径验证,可在触发时传入:
+
+```sh
+SOCKS_ENABLED=false sh scripts/tsocks-test-trigger.sh public-http
+```
+
+这样会保持同一测试目标,但路由判定应落到 `DIRECT`,便于验证实验性总开关行为。
+
+### 已实现能力
+
+- 对命中 `TAILNET_SOCKS` 的目标 IP 自动注入 `/32 route`
+- 流量进入 `tun0` 后,在 `step0_tun` 数据面内按 `IP:port` 做规则判断
+- 命中规则的流量由 gVisor terminator 接管并通过 tailnet SOCKS5 转发
+- 未命中规则的流量仍会被记录为 `DIRECT` / `TAILSCALE_NORMAL`,并带出 `injectedRoute`、`offloadDecision`、`offloadReason`、`recursionGuard` 等机判字段
+- 每个真实数据面 flow 都会分配稳定 `flow_id`,并输出 `route_decision`、`flow_identified`、`terminator_attach`、`socks_connect`、`relay_start`、`relay_end`、`conn_close`
+- 数据面会额外观测 `syn_received`、`synack_sent`、`ack_seen`、`fin_seen` / `finack_seen`、`rst_seen`
+- `relay_start` / `relay_end` 会带出 `activeRelays`、`goroutines`、`openFDs`,用于发现 goroutine / fd 泄漏
+
+### 未实现能力
+
+- 没有实现“同一 IP 的其他端口真正保持系统级 DIRECT”
+- 没有实现域名规则、UDP、QUIC/HTTP3、GeoIP/GeoSite、IPv6、UI 或持久化配置
+- `phase3-recursion-guard` 当前仍是 preview-only 路由验证,不是完整的真实数据面递归回归测试
+
+### 已知限制
+
+- 当前阶段是 `phase-3.2`:数据面可验证、可压测、可诊断的工程原型;仍然不是“真正精确 IP:port 直连/代理分流”。
+- 只有命中 `TAILNET_SOCKS` 的公网目标会被自动注入精确 `/32` VPN route 进入 `tun0`;不会泛化成大网段。
+- `phase3-recursion-guard` 目前采用 preview-only 路由校验,重点验证规则优先级与防递归,不等同于真实数据面转发。
+- 由于 Android `VpnService.Builder.AddRoute` 的最小粒度是前缀而不是端口,当前 `/32` 注入模型只能精确到 IP,不能让同一 IP 的非白名单端口真正保持系统 `DIRECT`;例如 `104.18.26.120:81` 仍会进入 `tun0` 并落到 `selectedRoute=DIRECT injectedRoute=true entered_tun_due_to_/32=true offloadDecision=bypass offloadReason=RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32 expectedBehavior=true` 的观测状态。这是预期行为,不是 bug。
+- baseline 的 LAN / tailnet 目标地址默认来自 `scripts/tsocks-test-env.sh` 的动态解析;如果本机网络环境变化,需要优先检查 `TSOCKS_TEST_LAN_HOST` / `TSOCKS_TEST_TAILNET_HOST` 是否正确。
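上述 `/32` 语义边界对应的判定逻辑可以简化成如下假设性 Go 草图(字段名沿用日志中的机判字段,逻辑仅为示意,非真实数据面代码):

```go
package main

import "fmt"

// decision 对应一条进入 tun0 的 flow 的可观测字段。
type decision struct {
	selectedRoute   string
	offloadDecision string
	offloadReason   string
}

// decide 示意 phase-3.2 的边界:/32 注入只精确到 IP,所以已注入 IP 上的
// 非白名单端口仍会进入 tun0,只能 bypass 而不能走 SOCKS offload。
func decide(ruleMatched, routeInjected bool) decision {
	switch {
	case ruleMatched:
		return decision{"TAILNET_SOCKS", "offload", "RULE_MATCHED"}
	case routeInjected:
		return decision{"DIRECT", "bypass", "RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32"}
	default:
		return decision{"DIRECT", "bypass", "NO_RULE"}
	}
}

func main() {
	d := decide(false, true) // 例如 104.18.26.120:81
	fmt.Println(d.selectedRoute, d.offloadDecision, d.offloadReason)
}
```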
+
+复现实验:
+
+```sh
+SERIAL= sh scripts/tsocks-test-trigger.sh phase3-wrong-port-entered-tun
+SERIAL= sh scripts/tsocks-test-logs.sh | grep '104.18.26.120:81'
+```
diff --git "a/docs/03-\346\212\200\346\234\257\346\214\207\345\215\227.md" "b/docs/03-\346\212\200\346\234\257\346\214\207\345\215\227.md"
new file mode 100644
index 0000000000..7170b5df26
--- /dev/null
+++ "b/docs/03-\346\212\200\346\234\257\346\214\207\345\215\227.md"
@@ -0,0 +1,72 @@
+# 03-技术指南
+
+## 架构设计
+
+Tailscale Android 客户端采用分层架构:
+
+1. **Go 核心层**:实现 Tailscale 协议和 WireGuard 集成
+2. **JNI 桥接层**:连接 Go 代码和 Android 运行时
+3. **Android 应用层**:处理 UI、系统服务和平台集成
+
+## 核心组件
+
+- **libtailscale**:Go 代码编译的 AAR 库,包含核心 Tailscale 功能
+- **Android UI**:标准 Android Activity 和 Fragment 架构
+- **VPN 服务**:Android VpnService 实现
+- **tsocks datapath harness**:基于 `IPNReceiver` + `WorkManager` 的 adb 调试入口,用于 phase-3.x 数据面验证
+- **gVisor terminator**:在 `step0_tun` 中接管命中的 TUN 内 TCP 流并转发到 tailnet SOCKS5
+- **host 测试服务**:phase-3.2 引入的本地 HTTP/TCP baseline 服务,用于 clean baseline、生命周期与压测场景
+
+## 技术栈
+
+- **编程语言**:Go、Kotlin/Java
+- **构建系统**:Make、Gradle
+- **网络协议**:WireGuard、Tailscale
+- **最低支持版本**:minSdkVersion 26(targetSdkVersion 35,与 `android/build.gradle` 一致)
+- **NDK 版本**:23.1.7779620
+
+## 构建系统
+
+项目使用 Makefile 作为主要构建入口,协调 Go 编译和 Android Gradle 构建过程。主要构建产物包括:
+
+- Debug APK:`tailscale-debug.apk`
+- Release AAB:`tailscale-release.aab`
+- TV Release AAB:`tailscale-tv-release.aab`
+
+## phase-3.2 datapath 原型
+
+phase-3.2 的目标不是扩展新功能,而是把现有 `TAILNET_SOCKS` TCP 实验路径提升为可验证、可压测、可诊断的工程原型。
+
+### 规则与转发
+
+- 规则源集中在 `libtailscale/tsocks_rules.go`
+- 对命中 `TAILNET_SOCKS` 的公网目标自动注入 `/32` route
+- 流量进入 `tun0` 后,在 `libtailscale/step0_tun.go` 中按 `IP:port` 做数据面判定
+- 命中规则后,由 gVisor terminator 接管,并通过 tailnet SOCKS5 转发
+
+### 观测能力
+
+- 每个真实数据面 flow 使用稳定 `flow_id`
+- 关键事件统一输出到 `TSOCKS_DATAPATH` / `TSOCKS_SOCKS`
+- 典型生命周期事件包括:
+ - `route_decision`
+ - `flow_identified`
+ - `syn_received` / `synack_sent` / `ack_seen`
+ - `terminator_attach`
+ - `socks_connect`
+ - `relay_start` / `relay_end`
+ - `fin_seen` / `finack_seen` / `rst_seen`
+ - `conn_close`
+- `relay_start` / `relay_end` 会同时输出 `activeRelays`、`goroutines`、`openFDs`,用于发现 goroutine / fd 泄漏
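稳定 `flow_id` 的一种可能派生方式(假设性草图,真实实现可能不同)是对 TCP 四元组加首个 SYN 时间戳做哈希,使同一条 flow 的所有生命周期事件都能用同一个 key 关联:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// flowID 用四元组 + 首个 SYN 的时间戳派生稳定标识;同一输入永远得到
// 同一 flow_id,不同 flow(包括同一四元组的重连)得到不同 flow_id。
func flowID(srcIP string, srcPort int, dstIP string, dstPort int, synUnixNano int64) string {
	h := fnv.New64a()
	fmt.Fprintf(h, "%s:%d->%s:%d@%d", srcIP, srcPort, dstIP, dstPort, synUnixNano)
	return fmt.Sprintf("flow-%016x", h.Sum64())
}

func main() {
	a := flowID("100.100.1.2", 40000, "104.18.26.120", 80, 1712000000)
	b := flowID("100.100.1.2", 40000, "104.18.26.120", 80, 1712000000)
	fmt.Println(a == b, len(a))
}
```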
+
+### baseline 与压测
+
+- `scripts/tsocks-test-env.sh` 会自动解析当前机器的 LAN/tailnet 地址
+- `scripts/tsocks_test_server.py` 提供本地 HTTP/TCP baseline 服务
+- `scripts/tsocks-test-phase32.sh` 提供 phase-3.2 验收脚本,覆盖 baseline、并发、错端口边界和 TCP 生命周期
+
+### 已知边界
+
+- 路由模型仍然是 `/32`,不做系统级精确 `IP:port` 分流
+- `104.18.26.120:81` 这类错端口流量仍会进入 `tun0`,但会以 `selectedRoute=DIRECT` + `offloadDecision=bypass` 收口
+- 这属于预期行为,不是 bug
diff --git "a/docs/04-\346\233\264\346\226\260\346\227\245\345\277\227.md" "b/docs/04-\346\233\264\346\226\260\346\227\245\345\277\227.md"
new file mode 100644
index 0000000000..b7a5f8b511
--- /dev/null
+++ "b/docs/04-\346\233\264\346\226\260\346\227\245\345\277\227.md"
@@ -0,0 +1,12 @@
+# 04-更新日志
+
+## 版本更新记录
+
+(此文档用于记录版本更新和 Bug 修复,请在每次发布时更新)
+
+### 待发布
+- phase-3.2:将 `TAILNET_SOCKS` TCP 实验路径升级为“数据面可验证、可压测、可诊断”的工程原型
+- 新增稳定 `flow_id`、`terminator_attach` / `socks_connect` / `relay_start` / `relay_end` / `conn_close` 生命周期日志
+- 新增 `SYN/SYN-ACK/ACK/FIN/RST` 数据面观测与 `activeRelays` / `goroutines` / `openFDs` 资源快照
+- 新增 host 侧 baseline HTTP/TCP 测试服务、动态 LAN/tailnet 地址解析与健康检查脚本
+- 新增 `phase32` 真机验收脚本,覆盖 baseline、并发压测、错端口 `/32` 边界与 lifecycle 验证
diff --git a/libtailscale/backend.go b/libtailscale/backend.go
index 031bb0ef84..f2f32b1b62 100644
--- a/libtailscale/backend.go
+++ b/libtailscale/backend.go
@@ -56,6 +56,7 @@ type App struct {
localAPIHandler http.Handler
backend *ipnlocal.LocalBackend
+ tsocks *tsocksController
ready sync.WaitGroup
backendMu sync.Mutex
}
@@ -99,6 +100,7 @@ type backend struct {
logIDPublic logid.PublicID
logger *logtail.Logger
+ tsocks *tsocksController
bus *eventbus.Bus
@@ -142,6 +144,7 @@ func (a *App) runBackend(ctx context.Context, hardwareAttestation bool) error {
}
a.logIDPublicAtomic.Store(&b.logIDPublic)
a.backend = b.backend
+ a.tsocks = b.tsocks
if hardwareAttestation {
a.backend.SetHardwareAttested()
}
@@ -301,6 +304,7 @@ func (a *App) newBackend(dataDir string, appCtx AppContext, store *stateStore,
b.netMon = netMon
b.setupLogs(dataDir, logID, logf, sys.HealthTracker.Get())
dialer := new(tsdial.Dialer)
+ b.tsocks = newTSocksController(appCtx, dialer)
vf := &VPNFacade{
SetBoth: b.setCfg,
GetBaseConfigFunc: b.getDNSBaseConfig,
@@ -327,6 +331,7 @@ func (a *App) newBackend(dataDir string, appCtx AppContext, store *stateStore,
if err != nil {
return nil, fmt.Errorf("netstack.Create: %w", err)
}
+ ns.GetTCPHandlerForFlow = b.tsocks.datapathHandler
sys.Set(ns)
ns.ProcessLocalIPs = false // let Android kernel handle it; VpnBuilder sets this up
ns.ProcessSubnets = true // for Android-being-an-exit-node support
diff --git a/libtailscale/interfaces.go b/libtailscale/interfaces.go
index ecebba5b47..7a424aed83 100644
--- a/libtailscale/interfaces.go
+++ b/libtailscale/interfaces.go
@@ -134,6 +134,10 @@ type Application interface {
// on every new ipn.Notify message. The returned NotificationManager
// allows the watcher to stop watching notifications.
WatchNotifications(mask int, cb NotificationCallback) NotificationManager
+
+ // RunTsocksProbe executes the shared tsocks probe path and returns a JSON
+ // result payload for Android-side adb automation.
+ RunTsocksProbe(requestJSON string) (string, error)
}
// FileParts is an array of multiple FileParts.
diff --git a/libtailscale/net.go b/libtailscale/net.go
index 29242d8544..d7659bcd12 100644
--- a/libtailscale/net.go
+++ b/libtailscale/net.go
@@ -124,6 +124,12 @@ func (b *backend) updateTUN(rcfg *router.Config, dcfg *dns.OSConfig) error {
return err
}
}
+ for _, routeTarget := range tsocksInjectedRouteTargets() {
+ if err := builder.AddRoute(routeTarget.String(), 32); err != nil {
+ return err
+ }
+ b.logger.Logf("updateTUN: added tsocks injected route %s/32", routeTarget)
+ }
for _, route := range rcfg.LocalRoutes {
addr := route.Addr()
@@ -180,6 +186,11 @@ func (b *backend) updateTUN(rcfg *router.Config, dcfg *dns.OSConfig) error {
return err
}
b.logger.Logf("updateTUN: created TUN device")
+ if tunDev, err = newStep0Tun(tunDev, b.appCtx, b.tsocks); err != nil {
+ closeFileDescriptor()
+ return err
+ }
+ b.logger.Logf("updateTUN: wrapped TUN device for step0")
b.devices.add(tunDev)
b.logger.Logf("updateTUN: added TUN device")
diff --git a/libtailscale/step0_tun.go b/libtailscale/step0_tun.go
new file mode 100644
index 0000000000..3fe6cf11a0
--- /dev/null
+++ b/libtailscale/step0_tun.go
@@ -0,0 +1,359 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+
+package libtailscale
+
+import (
+ "context"
+ "fmt"
+ "net"
+ "net/netip"
+ "os"
+ "slices"
+ "strings"
+ "sync"
+ "time"
+
+ wtun "github.com/tailscale/wireguard-go/tun"
+ "gvisor.dev/gvisor/pkg/buffer"
+ "gvisor.dev/gvisor/pkg/tcpip"
+ "gvisor.dev/gvisor/pkg/tcpip/adapters/gonet"
+ "gvisor.dev/gvisor/pkg/tcpip/header"
+ "gvisor.dev/gvisor/pkg/tcpip/link/channel"
+ "gvisor.dev/gvisor/pkg/tcpip/network/ipv4"
+ "gvisor.dev/gvisor/pkg/tcpip/stack"
+ "gvisor.dev/gvisor/pkg/tcpip/transport/tcp"
+)
+
+type step0Tun struct {
+ raw wtun.Device
+ appCtx AppContext
+ tsocks *tsocksController
+
+ ep *channel.Endpoint
+ stack *stack.Stack
+ ctx context.Context
+ cancel context.CancelFunc
+ lns []*gonet.TCPListener
+
+ mu sync.Mutex
+ seenFlows map[string]bool
+ seenPayloads map[string]bool
+ seenRoutes map[string]bool
+ seenEvents map[string]bool
+ closed bool
+}
+
+func newStep0Tun(raw wtun.Device, appCtx AppContext, tsocks *tsocksController) (wtun.Device, error) {
+ mtu, err := raw.MTU()
+ if err != nil {
+ return nil, err
+ }
+ ctx, cancel := context.WithCancel(context.Background())
+ w := &step0Tun{
+ raw: raw, appCtx: appCtx, tsocks: tsocks,
+ ctx: ctx, cancel: cancel,
+ seenFlows: map[string]bool{}, seenPayloads: map[string]bool{},
+ seenRoutes: map[string]bool{}, seenEvents: map[string]bool{},
+ }
+ if err := w.initProofStack(uint32(mtu)); err != nil {
+ cancel()
+ return nil, err
+ }
+ go w.pumpProofPackets()
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=step0_enabled targets=%s route=%s", tsocksTargetsSummary(tsocksInterceptTargets()), tsocksRouteTailnetSocks))
+ return w, nil
+}
+
+func (w *step0Tun) initProofStack(mtu uint32) error {
+ w.ep = channel.New(1024, mtu, "")
+ w.stack = stack.New(stack.Options{
+ NetworkProtocols: []stack.NetworkProtocolFactory{ipv4.NewProtocol},
+ TransportProtocols: []stack.TransportProtocolFactory{tcp.NewProtocol},
+ HandleLocal: true,
+ })
+ if tcpipErr := w.stack.CreateNIC(1, w.ep); tcpipErr != nil {
+ return fmt.Errorf("CreateNIC: %v", tcpipErr)
+ }
+ for _, addr := range tsocksInjectedRouteTargets() {
+ protoAddr := tcpip.ProtocolAddress{
+ Protocol: ipv4.ProtocolNumber,
+ AddressWithPrefix: tcpip.AddrFromSlice(addr.AsSlice()).WithPrefix(),
+ }
+ if tcpipErr := w.stack.AddProtocolAddress(1, protoAddr, stack.AddressProperties{}); tcpipErr != nil {
+ return fmt.Errorf("AddProtocolAddress %s: %v", addr, tcpipErr)
+ }
+ }
+ w.stack.SetRouteTable([]tcpip.Route{{Destination: header.IPv4EmptySubnet, NIC: 1}})
+ for _, target := range tsocksInterceptTargets() {
+ listener, err := gonet.ListenTCP(w.stack, tcpip.FullAddress{NIC: 1, Addr: tcpip.AddrFromSlice(target.Addr().AsSlice()), Port: target.Port()}, ipv4.ProtocolNumber)
+ if err != nil {
+ return err
+ }
+ w.lns = append(w.lns, listener)
+ go w.serveTargetListener(listener, target)
+ }
+ return nil
+}
+
+func (w *step0Tun) serveTargetListener(listener *gonet.TCPListener, target netip.AddrPort) {
+ for {
+ conn, err := listener.Accept()
+ if err != nil {
+ w.mu.Lock()
+ closed := w.closed
+ w.mu.Unlock()
+ if !closed {
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=listener_accept_fail dst=%s reason=%s", target, sanitizeForLog(err.Error())))
+ }
+ return
+ }
+ src, ok := addrPortFromNetAddr(conn.RemoteAddr())
+ if !ok {
+ src = netip.MustParseAddrPort("0.0.0.0:0")
+ }
+ flowID := tsocksFlowID(src, target)
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=forwarder_accept flow_id=%s src=%s dst=%s", flowID, src, target))
+ go w.serveProofConn(conn, target)
+ }
+}
+
+func (w *step0Tun) serveProofConn(conn net.Conn, target netip.AddrPort) {
+ defer conn.Close()
+ decision := matchTSocksRule(target)
+ src, ok := addrPortFromNetAddr(conn.RemoteAddr())
+ if !ok {
+ src = netip.MustParseAddrPort("0.0.0.0:0")
+ }
+ flowID := tsocksFlowID(src, target)
+ w.tsocks.logTerminatorAttach(flowID, src, target, decision, "gvisor_listener_accept")
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=endpoint_created flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t", flowID, src, target, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied))
+ ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+ defer cancel()
+ backend, err := w.tsocks.dialViaSocks(ctx, flowID, target.Addr().String(), int(target.Port()), "datapath", target.String())
+ if err != nil {
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=target_connect_fail flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t reason=%s", flowID, src, target, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, sanitizeForLog(err.Error())))
+ return
+ }
+ defer backend.Close()
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=target_connect_success flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t", flowID, src, target, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied))
+ w.tsocks.relayStart(flowID, src, target, decision)
+ bytesUp, bytesDown, reason := relayTCP(conn, backend)
+ reason = w.adjustCloseReason(flowID, reason)
+ w.tsocks.relayEnd(flowID, src, target, decision, bytesUp, bytesDown, reason)
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=conn_close flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t bytes_up=%d bytes_down=%d closeReason=%s", flowID, src, target, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, bytesUp, bytesDown, sanitizeForLog(reason)))
+}
+
+// pumpProofPackets drains packets emitted by the in-process proof stack and
+// writes them out through the raw TUN device, logging TCP state transitions.
+func (w *step0Tun) pumpProofPackets() {
+ for {
+ pkt := w.ep.ReadContext(w.ctx)
+ if pkt == nil {
+ return
+ }
+ view := pkt.ToView()
+ packet := append([]byte(nil), view.AsSlice()...)
+ pkt.DecRef()
+ w.logOutboundTCP(packet)
+ if _, err := w.raw.Write([][]byte{packet}, 0); err != nil {
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=raw_write_fail reason=%s", sanitizeForLog(err.Error())))
+ return
+ }
+ }
+}
+
+func (w *step0Tun) logOutboundTCP(packet []byte) {
+ if len(packet) < header.IPv4MinimumSize {
+ return
+ }
+ ip := header.IPv4(packet)
+ if !ip.IsValid(len(packet)) || ip.TransportProtocol() != header.TCPProtocolNumber {
+ return
+ }
+ tcpHdr := header.TCP(ip.Payload())
+ flags := tcpHdr.Flags()
+ src := netip.AddrPortFrom(netip.AddrFrom4(ip.SourceAddress().As4()).Unmap(), tcpHdr.SourcePort())
+ dst := netip.AddrPortFrom(netip.AddrFrom4(ip.DestinationAddress().As4()).Unmap(), tcpHdr.DestinationPort())
+ flowID := tsocksFlowID(src, dst)
+ if flags.Contains(header.TCPFlagSyn) && flags.Contains(header.TCPFlagAck) {
+ w.logTCPEventOnce(flowID, "synack_sent", src, dst, "direction=server_to_client")
+ }
+ if flags == header.TCPFlagAck {
+ w.logTCPEventOnce(flowID, "ack_seen", src, dst, "direction=server_to_client")
+ }
+ if flags.Contains(header.TCPFlagFin) && flags.Contains(header.TCPFlagAck) {
+ w.logTCPEventOnce(flowID, "finack_seen", src, dst, "direction=server_to_client")
+ } else if flags.Contains(header.TCPFlagFin) {
+ w.logTCPEventOnce(flowID, "fin_seen", src, dst, "direction=server_to_client")
+ }
+ if flags.Contains(header.TCPFlagRst) {
+ w.logTCPEventOnce(flowID, "rst_seen", src, dst, "direction=server_to_client")
+ }
+}
+
+func (w *step0Tun) File() *os.File { return w.raw.File() }
+
+// Read pulls packets from the raw TUN device, diverts flows destined for a
+// tsocks intercept target into the proof stack, and passes the rest through.
+func (w *step0Tun) Read(bufs [][]byte, sizes []int, offset int) (int, error) {
+ for {
+ n, err := w.raw.Read(bufs, sizes, offset)
+ if err != nil {
+ return n, err
+ }
+ out := 0
+ for i := 0; i < n; i++ {
+ packet := bufs[i][offset : offset+sizes[i]]
+ if w.shouldIntercept(packet) {
+ w.injectProofPacket(packet)
+ continue
+ }
+ if out != i {
+ copy(bufs[out][offset:], packet)
+ sizes[out] = sizes[i]
+ }
+ out++
+ }
+ if out > 0 {
+ return out, nil
+ }
+ }
+}
+
+// shouldIntercept reports whether an inbound IPv4/TCP packet targets a tsocks
+// intercept target routed via TAILNET_SOCKS; as a side effect it logs the
+// per-flow route decision and TCP state events.
+func (w *step0Tun) shouldIntercept(packet []byte) bool {
+ if len(packet) < header.IPv4MinimumSize {
+ return false
+ }
+ ip := header.IPv4(packet)
+ if !ip.IsValid(len(packet)) || ip.TransportProtocol() != header.TCPProtocolNumber {
+ return false
+ }
+ src := netip.AddrFrom4(ip.SourceAddress().As4()).Unmap()
+ dst := netip.AddrFrom4(ip.DestinationAddress().As4()).Unmap()
+ tcpHdr := header.TCP(ip.Payload())
+ dstPort := tcpHdr.DestinationPort()
+ target := netip.AddrPortFrom(dst, dstPort)
+ decision := w.tsocks.routeForDatapath(target)
+ flowID := tsocksFlowID(netip.AddrPortFrom(src, tcpHdr.SourcePort()), target)
+ flags := tcpHdr.Flags()
+ if flags.Contains(header.TCPFlagSyn) && !flags.Contains(header.TCPFlagAck) {
+ offloadState := tsocksDecisionOffloadState(decision, target)
+ w.mu.Lock()
+ key := flowID
+ firstRoute := !w.seenRoutes[key]
+ if firstRoute {
+ w.seenRoutes[key] = true
+ }
+ first := !w.seenFlows[key] && decision.Route == tsocksRouteTailnetSocks
+ if first {
+ w.seenFlows[key] = true
+ }
+ w.mu.Unlock()
+ if firstRoute {
+ line := fmt.Sprintf("event=route_decision flow_id=%s src=%s:%d dst=%s:%d protocol=tcp matchedRule=%s selectedRoute=%s injectedRoute=%t entered_tun_due_to_/32=%t offloadDecision=%s offloadReason=%s recursionGuard=%t", flowID, src, tcpHdr.SourcePort(), dst, dstPort, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, decision.InjectedRouteApplied, offloadState.Decision, offloadState.Reason, tsocksDecisionRecursionGuard(decision))
+ if decision.InjectedRouteApplied && decision.Route == tsocksRouteDirect {
+ line += " expectedBehavior=true note=entered_tun_due_to_/32_is_expected_not_bug"
+ }
+ w.log(tsocksDatapathTag, line)
+ }
+ if first {
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=flow_identified flow_id=%s src=%s:%d dst=%s:%d protocol=tcp matchedRule=%s selectedRoute=%s injectedRoute=%t offloadDecision=%s offloadReason=%s recursionGuard=%t", flowID, src, tcpHdr.SourcePort(), dst, dstPort, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, offloadState.Decision, offloadState.Reason, tsocksDecisionRecursionGuard(decision)))
+ w.logTCPEventOnce(flowID, "syn_received", netip.AddrPortFrom(src, tcpHdr.SourcePort()), target, "direction=client_to_server")
+ }
+ }
+ if flags == header.TCPFlagAck {
+ w.logTCPEventOnce(flowID, "ack_seen", netip.AddrPortFrom(src, tcpHdr.SourcePort()), target, "direction=client_to_server")
+ }
+ if flags.Contains(header.TCPFlagFin) && flags.Contains(header.TCPFlagAck) {
+ w.logTCPEventOnce(flowID, "finack_seen", netip.AddrPortFrom(src, tcpHdr.SourcePort()), target, "direction=client_to_server")
+ } else if flags.Contains(header.TCPFlagFin) {
+ w.logTCPEventOnce(flowID, "fin_seen", netip.AddrPortFrom(src, tcpHdr.SourcePort()), target, "direction=client_to_server")
+ }
+ if flags.Contains(header.TCPFlagRst) {
+ w.logTCPEventOnce(flowID, "rst_seen", netip.AddrPortFrom(src, tcpHdr.SourcePort()), target, "direction=client_to_server")
+ }
+ if decision.Route != tsocksRouteTailnetSocks || !slices.Contains(tsocksInterceptTargets(), target) {
+ return false
+ }
+ if len(tcpHdr.Payload()) > 0 {
+ w.mu.Lock()
+ firstData := !w.seenPayloads[flowID]
+ if firstData {
+ w.seenPayloads[flowID] = true
+ }
+ w.mu.Unlock()
+ if firstData {
+ w.log(tsocksDatapathTag, fmt.Sprintf("event=payload_seen flow_id=%s src=%s:%d dst=%s:%d bytes=%d", flowID, src, tcpHdr.SourcePort(), dst, dstPort, len(tcpHdr.Payload())))
+ }
+ }
+ return true
+}
+
+func (w *step0Tun) injectProofPacket(packet []byte) {
+ pkb := stack.NewPacketBuffer(stack.PacketBufferOptions{Payload: buffer.MakeWithData(append([]byte(nil), packet...))})
+ w.ep.InjectInbound(header.IPv4ProtocolNumber, pkb)
+}
+
+func (w *step0Tun) Write(bufs [][]byte, offset int) (int, error) { return w.raw.Write(bufs, offset) }
+func (w *step0Tun) MTU() (int, error) { return w.raw.MTU() }
+func (w *step0Tun) Name() (string, error) { return w.raw.Name() }
+func (w *step0Tun) Events() <-chan wtun.Event { return w.raw.Events() }
+func (w *step0Tun) BatchSize() int { return w.raw.BatchSize() }
+
+func (w *step0Tun) Close() error {
+ w.mu.Lock()
+ w.closed = true
+ w.mu.Unlock()
+ w.cancel()
+ for _, ln := range w.lns {
+ _ = ln.Close()
+ }
+ if w.ep != nil {
+ w.ep.Close()
+ }
+ return w.raw.Close()
+}
+
+func addrPortFromNetAddr(addr net.Addr) (netip.AddrPort, bool) {
+ if addr == nil {
+ return netip.AddrPort{}, false
+ }
+ parsed, err := netip.ParseAddrPort(addr.String())
+ if err != nil {
+ return netip.AddrPort{}, false
+ }
+ return parsed, true
+}
+
+func (w *step0Tun) log(tag, line string) {
+ if w.appCtx != nil {
+ w.appCtx.Log(tag, line)
+ }
+}
+
+func (w *step0Tun) logTCPEventOnce(flowID, event string, src, dst netip.AddrPort, extra string) {
+ key := flowID + ":" + event + ":" + extra
+ w.mu.Lock()
+ if w.seenEvents[key] {
+ w.mu.Unlock()
+ return
+ }
+ w.seenEvents[key] = true
+ w.mu.Unlock()
+ line := fmt.Sprintf("event=%s flow_id=%s src=%s dst=%s", event, flowID, src, dst)
+ if extra != "" {
+ line += " " + extra
+ }
+ w.log(tsocksDatapathTag, line)
+}
+
+// adjustCloseReason upgrades a generic close reason to client_rst or
+// server_rst when an RST was observed on the flow.
+func (w *step0Tun) adjustCloseReason(flowID, reason string) string {
+ if strings.HasSuffix(reason, "_rst") {
+ return reason
+ }
+ if w.hasSeenEvent(flowID + ":rst_seen:direction=client_to_server") {
+ return "client_rst"
+ }
+ if w.hasSeenEvent(flowID + ":rst_seen:direction=server_to_client") {
+ return "server_rst"
+ }
+ return reason
+}
+
+func (w *step0Tun) hasSeenEvent(key string) bool {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ return w.seenEvents[key]
+}
diff --git a/libtailscale/tsocks.go b/libtailscale/tsocks.go
new file mode 100644
index 0000000000..451e6cb443
--- /dev/null
+++ b/libtailscale/tsocks.go
@@ -0,0 +1,605 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+
+package libtailscale
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "net/netip"
+ "strconv"
+ "strings"
+ "time"
+
+ "tailscale.com/net/tsdial"
+)
+
+const (
+ tsocksTestTag = "TSOCKS_TEST"
+ tsocksRouteTag = "TSOCKS_ROUTE"
+ tsocksSocksTag = "TSOCKS_SOCKS"
+ tsocksDatapathTag = "TSOCKS_DATAPATH"
+
+ tsocksLANHost = "192.168.31.101"
+ tsocksTailnetLabHost = "100.109.193.113"
+ tsocksTailnetDomainHost = "wide-ts-wu"
+ tsocksServerHost = "100.78.63.77"
+ tsocksServerPort = 1080
+ tsocksPublicHost = "example.com"
+ tsocksPublicPort = 80
+
+ tsocksProbeTimeoutDefault = 5000
+ tsocksMaxTimeoutMs = 10000
+)
+
+type tsocksRoute string
+
+const (
+ tsocksRouteDirect tsocksRoute = "DIRECT"
+ tsocksRouteTailscaleNormal tsocksRoute = "TAILSCALE_NORMAL"
+ tsocksRouteTailnetSocks tsocksRoute = "TAILNET_SOCKS"
+)
+
+type tsocksRouteDecision struct {
+ Route tsocksRoute `json:"route"`
+ MatchedRule string `json:"matchedRule"`
+ InjectedRouteApplied bool `json:"injectedRouteApplied"`
+}
+
+type tsocksProbeRequest struct {
+ Scenario string `json:"scenario"`
+ RequestID string `json:"requestId"`
+ Host string `json:"host"`
+ Port int `json:"port"`
+ Protocol string `json:"protocol"`
+ Path string `json:"path"`
+ Payload string `json:"payload"`
+ HostHeader string `json:"hostHeader"`
+ TimeoutMs int `json:"timeoutMs"`
+ SocksEnabled bool `json:"socksEnabled"`
+ PreviewOnly bool `json:"previewOnly"`
+}
+
+type tsocksProbeResult struct {
+ Route string `json:"route"`
+ MatchedRule string `json:"matchedRule"`
+ BytesSent int `json:"bytesSent"`
+ BytesReceived int `json:"bytesReceived"`
+ Detail string `json:"detail"`
+ InjectedRoute bool `json:"injectedRouteApplied"`
+}
+
+type tsocksController struct {
+ appCtx AppContext
+ dialer *tsdial.Dialer
+ activeRelays int64
+}
+
+func newTSocksController(appCtx AppContext, dialer *tsdial.Dialer) *tsocksController {
+ return &tsocksController{appCtx: appCtx, dialer: dialer}
+}
+
+func (a *App) RunTsocksProbe(requestJSON string) (string, error) {
+ a.ready.Wait()
+ if a.tsocks == nil {
+ return "", errors.New("tsocks_not_ready")
+ }
+ result, err := a.tsocks.runProbe(requestJSON)
+ if err != nil {
+ return "", err
+ }
+ b, err := json.Marshal(result)
+ if err != nil {
+ return "", err
+ }
+ return string(b), nil
+}
+
+func (c *tsocksController) runProbe(requestJSON string) (*tsocksProbeResult, error) {
+ var req tsocksProbeRequest
+ if err := json.Unmarshal([]byte(requestJSON), &req); err != nil {
+ return nil, err
+ }
+ if req.Scenario == "" {
+ req.Scenario = "unspecified"
+ }
+ if req.RequestID == "" {
+ req.RequestID = fmt.Sprintf("req-%d", time.Now().UnixMilli())
+ }
+ if req.Protocol == "" {
+ req.Protocol = "tcp"
+ }
+ if req.TimeoutMs == 0 {
+ req.TimeoutMs = tsocksProbeTimeoutDefault
+ }
+ if err := c.validateProbeRequest(req); err != nil {
+ c.log(tsocksTestTag, fmt.Sprintf("event=TEST_FAIL requestId=%s scenario=%s route=UNKNOWN reason=%s", req.RequestID, req.Scenario, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ c.log(tsocksTestTag, fmt.Sprintf("event=request_start requestId=%s scenario=%s protocol=%s host=%s port=%d timeoutMs=%d socksEnabled=%t", req.RequestID, req.Scenario, req.Protocol, req.Host, req.Port, req.TimeoutMs, req.SocksEnabled))
+ decision := c.routeForProbe(req)
+ probeTarget := net.JoinHostPort(req.Host, strconv.Itoa(req.Port))
+ offloadState := tsocksOffloadState{Decision: "bypass", Reason: "BASELINE_NATIVE_PATH_OK"}
+ if addr, err := netip.ParseAddr(req.Host); err == nil && addr.Is4() {
+ offloadState = tsocksDecisionOffloadState(decision, netip.AddrPortFrom(addr.Unmap(), uint16(req.Port)))
+ }
+ c.log(tsocksRouteTag, fmt.Sprintf("event=route_decision requestId=%s target=%s matchedRule=%s selectedRoute=%s injectedRoute=%t entered_tun_due_to_/32=%t offloadDecision=%s offloadReason=%s recursionGuard=%t", req.RequestID, probeTarget, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, decision.InjectedRouteApplied, offloadState.Decision, offloadState.Reason, tsocksDecisionRecursionGuard(decision)))
+ if req.PreviewOnly {
+ result := &tsocksProbeResult{
+ Route: string(decision.Route),
+ MatchedRule: decision.MatchedRule,
+ Detail: "preview_only",
+ InjectedRoute: decision.InjectedRouteApplied,
+ }
+ c.log(tsocksTestTag, fmt.Sprintf("event=TEST_PASS requestId=%s scenario=%s route=%s protocol=%s bytesSent=0 bytesReceived=0 detail=%s", req.RequestID, req.Scenario, decision.Route, req.Protocol, result.Detail))
+ return result, nil
+ }
+ ctx, cancel := context.WithTimeout(context.Background(), time.Duration(req.TimeoutMs)*time.Millisecond)
+ defer cancel()
+ conn, err := c.openProbeConn(ctx, req, decision)
+ if err != nil {
+ c.log(tsocksTestTag, fmt.Sprintf("event=TEST_FAIL requestId=%s scenario=%s route=%s reason=%s", req.RequestID, req.Scenario, decision.Route, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ defer conn.Close()
+ result, err := c.executeProbe(conn, req, decision)
+ if err != nil {
+ c.log(tsocksTestTag, fmt.Sprintf("event=TEST_FAIL requestId=%s scenario=%s route=%s reason=%s", req.RequestID, req.Scenario, decision.Route, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ result.Route = string(decision.Route)
+ result.MatchedRule = decision.MatchedRule
+ result.InjectedRoute = decision.InjectedRouteApplied
+ c.log(tsocksTestTag, fmt.Sprintf("event=TEST_PASS requestId=%s scenario=%s route=%s protocol=%s bytesSent=%d bytesReceived=%d detail=%s", req.RequestID, req.Scenario, decision.Route, req.Protocol, result.BytesSent, result.BytesReceived, sanitizeForLog(result.Detail)))
+ return result, nil
+}
+
+func (c *tsocksController) datapathHandler(src, dst netip.AddrPort) (func(net.Conn), bool) {
+ decision := c.routeForDatapath(dst)
+ flowID := tsocksFlowID(src, dst)
+ offloadState := tsocksDecisionOffloadState(decision, dst)
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=route_decision flow=datapath flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t entered_tun_due_to_/32=%t offloadDecision=%s offloadReason=%s recursionGuard=%t", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, decision.InjectedRouteApplied, offloadState.Decision, offloadState.Reason, tsocksDecisionRecursionGuard(decision)))
+ if decision.Route != tsocksRouteTailnetSocks {
+ return nil, false
+ }
+ return func(conn net.Conn) {
+ c.handleDatapathConn(src, dst, conn, decision)
+ }, true
+}
+
+func (c *tsocksController) handleDatapathConn(src, dst netip.AddrPort, client net.Conn, decision tsocksRouteDecision) {
+ defer client.Close()
+ ctx, cancel := context.WithTimeout(context.Background(), tsocksMaxTimeoutMs*time.Millisecond)
+ defer cancel()
+ flowID := tsocksFlowID(src, dst)
+ c.logTerminatorAttach(flowID, src, dst, decision, "netstack_handler")
+ backend, err := c.dialViaSocks(ctx, flowID, dst.Addr().String(), int(dst.Port()), "datapath", dst.String())
+ if err != nil {
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=target_connect_fail flow=datapath flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t reason=%s", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, sanitizeForLog(err.Error())))
+ return
+ }
+ defer backend.Close()
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=target_connect_success flow=datapath flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied))
+ c.relayStart(flowID, src, dst, decision)
+ bytesUp, bytesDown, reason := relayTCP(client, backend)
+ c.relayEnd(flowID, src, dst, decision, bytesUp, bytesDown, reason)
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=conn_close flow=datapath flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t bytes_up=%d bytes_down=%d closeReason=%s", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, bytesUp, bytesDown, sanitizeForLog(reason)))
+}
+
+func (c *tsocksController) validateProbeRequest(req tsocksProbeRequest) error {
+ if strings.TrimSpace(req.Host) == "" {
+ return errors.New("missing_host")
+ }
+ if req.Port < 1 || req.Port > 65535 {
+ return errors.New("invalid_port")
+ }
+ if req.Protocol != "tcp" && req.Protocol != "http" {
+ return errors.New("invalid_protocol")
+ }
+ if req.TimeoutMs <= 0 {
+ return errors.New("invalid_timeout")
+ }
+ if req.TimeoutMs > tsocksMaxTimeoutMs {
+ return errors.New("timeout_too_large")
+ }
+ return nil
+}
+
+func (c *tsocksController) routeForProbe(req tsocksProbeRequest) tsocksRouteDecision {
+ switch req.Scenario {
+ case "lan-http", "lan-tcp", "lan-tcp-close", "lan-tcp-rst":
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "lan_baseline", InjectedRouteApplied: false}
+ case "tailnet-http", "tailnet-tcp", "tailnet-tcp-close", "tailnet-tcp-rst":
+ return tsocksRouteDecision{Route: tsocksRouteTailscaleNormal, MatchedRule: "tailnet_lab_baseline", InjectedRouteApplied: false}
+ }
+ host := strings.ToLower(strings.TrimSpace(req.Host))
+ if addr, err := netip.ParseAddr(host); err == nil && addr.Is4() {
+ decision := matchTSocksRule(netip.AddrPortFrom(addr.Unmap(), uint16(req.Port)))
+ if !req.SocksEnabled && decision.Route == tsocksRouteTailnetSocks {
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "socks_disabled", InjectedRouteApplied: decision.InjectedRouteApplied}
+ }
+ return decision
+ }
+ switch {
+ case host == strings.ToLower(tsocksLANHost):
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "lan_baseline", InjectedRouteApplied: false}
+ case host == strings.ToLower(tsocksTailnetLabHost):
+ return tsocksRouteDecision{Route: tsocksRouteTailscaleNormal, MatchedRule: "tailnet_lab_baseline", InjectedRouteApplied: false}
+ case host == strings.ToLower(tsocksTailnetDomainHost):
+ return tsocksRouteDecision{Route: tsocksRouteTailscaleNormal, MatchedRule: "tailnet_domain_baseline", InjectedRouteApplied: false}
+ case host == strings.ToLower(tsocksServerHost) && req.Port == tsocksServerPort:
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "socks_server_self", InjectedRouteApplied: false}
+ case !req.SocksEnabled:
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "socks_disabled", InjectedRouteApplied: false}
+ case host == strings.ToLower(tsocksPublicHost) && req.Port == tsocksPublicPort:
+ return tsocksRouteDecision{Route: tsocksRouteTailnetSocks, MatchedRule: "public_allowlist_example_com_80", InjectedRouteApplied: false}
+ default:
+ return tsocksRouteDecision{Route: tsocksRouteDirect, MatchedRule: "default_direct", InjectedRouteApplied: false}
+ }
+}
+
+func (c *tsocksController) routeForDatapath(dst netip.AddrPort) tsocksRouteDecision {
+ return matchTSocksRule(dst)
+}
+
+func (c *tsocksController) openProbeConn(ctx context.Context, req tsocksProbeRequest, decision tsocksRouteDecision) (net.Conn, error) {
+ targetAddr := net.JoinHostPort(req.Host, strconv.Itoa(req.Port))
+ switch decision.Route {
+ case tsocksRouteDirect, tsocksRouteTailscaleNormal:
+ conn, err := c.dialer.UserDial(ctx, "tcp", targetAddr)
+ if err != nil {
+ c.log(tsocksTestTag, fmt.Sprintf("event=target_connect_fail requestId=%s route=%s host=%s port=%d reason=%s", req.RequestID, decision.Route, req.Host, req.Port, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ c.log(tsocksTestTag, fmt.Sprintf("event=target_connect_success requestId=%s route=%s host=%s port=%d", req.RequestID, decision.Route, req.Host, req.Port))
+ return conn, nil
+ case tsocksRouteTailnetSocks:
+ return c.dialViaSocks(ctx, req.RequestID, req.Host, req.Port, "probe", targetAddr)
+ default:
+ return nil, errors.New("unsupported_route")
+ }
+}
+
+func (c *tsocksController) executeProbe(conn net.Conn, req tsocksProbeRequest, decision tsocksRouteDecision) (*tsocksProbeResult, error) {
+ _ = conn.SetDeadline(time.Now().Add(time.Duration(req.TimeoutMs) * time.Millisecond))
+ switch req.Protocol {
+ case "http":
+ return c.probeHTTP(conn, req, decision)
+ case "tcp":
+ return c.probeTCP(conn, req, decision)
+ default:
+ return nil, errors.New("unsupported_protocol")
+ }
+}
+
+func (c *tsocksController) probeHTTP(conn net.Conn, req tsocksProbeRequest, decision tsocksRouteDecision) (*tsocksProbeResult, error) {
+ method := "GET"
+ bodyBytes := []byte(req.Payload)
+ if len(bodyBytes) > 0 {
+ method = "POST"
+ }
+ path := strings.TrimSpace(req.Path)
+ if path == "" {
+ path = "/"
+ }
+ if !strings.HasPrefix(path, "/") {
+ path = "/" + path
+ }
+ hostHeader := strings.TrimSpace(req.HostHeader)
+ if hostHeader == "" {
+ hostHeader = req.Host
+ }
+ headers := fmt.Sprintf("%s %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\nUser-Agent: tailscale-android-tsocks-test\r\n", method, path, hostHeader)
+ if len(bodyBytes) > 0 {
+ headers += fmt.Sprintf("Content-Type: text/plain; charset=utf-8\r\nContent-Length: %d\r\n", len(bodyBytes))
+ }
+ headers += "\r\n"
+ if _, err := conn.Write([]byte(headers)); err != nil {
+ return nil, err
+ }
+ bytesSent := len(headers)
+ if len(bodyBytes) > 0 {
+ if _, err := conn.Write(bodyBytes); err != nil {
+ return nil, err
+ }
+ bytesSent += len(bodyBytes)
+ }
+ responseBytes, err := io.ReadAll(io.LimitReader(conn, 8*1024))
+ if err != nil {
+ return nil, err
+ }
+ if len(responseBytes) == 0 {
+ return nil, errors.New("http_empty_response")
+ }
+ statusLine := strings.TrimSpace(strings.SplitN(string(responseBytes), "\n", 2)[0])
+ if !strings.HasPrefix(statusLine, "HTTP/1.") {
+ return nil, errors.New("http_bad_status_line")
+ }
+ parts := strings.Split(statusLine, " ")
+ if len(parts) < 2 {
+ return nil, errors.New("http_bad_status_line")
+ }
+ code, err := strconv.Atoi(parts[1])
+ if err != nil {
+ return nil, err
+ }
+ if code < 200 || code > 399 {
+ return nil, fmt.Errorf("http_status_%d", code)
+ }
+ c.log(tsocksTestTag, fmt.Sprintf("event=http_result requestId=%s route=%s statusLine=%s bytesSent=%d bytesReceived=%d", req.RequestID, decision.Route, sanitizeForLog(statusLine), bytesSent, len(responseBytes)))
+ return &tsocksProbeResult{BytesSent: bytesSent, BytesReceived: len(responseBytes), Detail: statusLine}, nil
+}
+
+func (c *tsocksController) probeTCP(conn net.Conn, req tsocksProbeRequest, decision tsocksRouteDecision) (*tsocksProbeResult, error) {
+ payload := req.Payload
+ if payload == "" {
+ payload = fmt.Sprintf("tailscale-tsocks-test requestId=%s scenario=%s\n", req.RequestID, req.Scenario)
+ }
+ trimmedPayload := strings.TrimSpace(payload)
+ expectsPong := strings.EqualFold(trimmedPayload, "PING")
+ if expectsPong {
+ payload = "PING\n"
+ } else if strings.EqualFold(trimmedPayload, "CLOSE") || strings.EqualFold(trimmedPayload, "RST") || strings.HasPrefix(strings.ToUpper(trimmedPayload), "STREAM") {
+ payload = trimmedPayload + "\n"
+ }
+ if _, err := io.WriteString(conn, payload); err != nil {
+ return nil, err
+ }
+ var responseBytes []byte
+ var err error
+ if expectsPong {
+ responseBytes, err = readUntil(conn, "PONG")
+ } else {
+ responseBytes, err = io.ReadAll(io.LimitReader(conn, 8*1024))
+ }
+ if err != nil {
+ return nil, err
+ }
+ if len(responseBytes) == 0 {
+ return nil, errors.New("tcp_empty_response")
+ }
+ responseText := strings.TrimSpace(string(responseBytes))
+ if expectsPong && !strings.Contains(responseText, "PONG") {
+ return nil, errors.New("tcp_missing_pong")
+ }
+ c.log(tsocksTestTag, fmt.Sprintf("event=tcp_result requestId=%s route=%s bytesSent=%d bytesReceived=%d response=%s", req.RequestID, decision.Route, len(payload), len(responseBytes), sanitizeForLog(responseText)))
+ return &tsocksProbeResult{BytesSent: len(payload), BytesReceived: len(responseBytes), Detail: "tcp_response_received"}, nil
+}
+
+// dialViaSocks connects to the tsocks SOCKS5 server over the tailnet dialer
+// and issues a CONNECT for the given target, logging each stage.
+func (c *tsocksController) dialViaSocks(ctx context.Context, requestID, targetHost string, targetPort int, flowType, target string) (net.Conn, error) {
+ conn, err := c.dialer.UserDial(ctx, "tcp", net.JoinHostPort(tsocksServerHost, strconv.Itoa(tsocksServerPort)))
+ if err != nil {
+ c.logSocksConnectEvent(requestID, target, "server_connect_fail", targetHost, targetPort, err)
+ c.log(tsocksSocksTag, fmt.Sprintf("event=socks_connect_fail flow=%s requestId=%s target=%s targetHost=%s targetPort=%d reason=%s", flowType, requestID, target, targetHost, targetPort, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ if deadline, ok := ctx.Deadline(); ok {
+ _ = conn.SetDeadline(deadline)
+ }
+ c.logSocksConnectEvent(requestID, target, "server_connect_success", targetHost, targetPort, nil)
+ c.log(tsocksSocksTag, fmt.Sprintf("event=socks_server_connect_success flow=%s requestId=%s target=%s socksHost=%s socksPort=%d", flowType, requestID, target, tsocksServerHost, tsocksServerPort))
+ if err := socksConnect(conn, targetHost, targetPort); err != nil {
+ _ = conn.Close()
+ c.logSocksConnectEvent(requestID, target, "connect_fail", targetHost, targetPort, err)
+ c.log(tsocksSocksTag, fmt.Sprintf("event=socks_connect_fail flow=%s requestId=%s target=%s targetHost=%s targetPort=%d reason=%s", flowType, requestID, target, targetHost, targetPort, sanitizeForLog(err.Error())))
+ return nil, err
+ }
+ _ = conn.SetDeadline(time.Time{})
+ c.logSocksConnectEvent(requestID, target, "connect_success", targetHost, targetPort, nil)
+ c.log(tsocksSocksTag, fmt.Sprintf("event=socks_connect_success flow=%s requestId=%s target=%s targetHost=%s targetPort=%d", flowType, requestID, target, targetHost, targetPort))
+ return conn, nil
+}
+
+func socksConnect(conn net.Conn, host string, port int) error {
+ if _, err := conn.Write([]byte{0x05, 0x01, 0x00}); err != nil {
+ return err
+ }
+ methodResponse := make([]byte, 2)
+ if _, err := io.ReadFull(conn, methodResponse); err != nil {
+ return err
+ }
+ if methodResponse[0] != 0x05 || methodResponse[1] != 0x00 {
+ return fmt.Errorf("socks_method_rejected_%d_%d", methodResponse[0], methodResponse[1])
+ }
+ request, err := buildSocksConnectRequest(host, port)
+ if err != nil {
+ return err
+ }
+ if _, err := conn.Write(request); err != nil {
+ return err
+ }
+ responseHeader := make([]byte, 4)
+ if _, err := io.ReadFull(conn, responseHeader); err != nil {
+ return err
+ }
+ if responseHeader[0] != 0x05 {
+ return fmt.Errorf("socks_bad_version_%d", responseHeader[0])
+ }
+ if responseHeader[1] != 0x00 {
+ return fmt.Errorf("socks_connect_reply_%d", responseHeader[1])
+ }
+ return discardSocksAddress(conn, int(responseHeader[3]))
+}
+
+func buildSocksConnectRequest(host string, port int) ([]byte, error) {
+ b := []byte{0x05, 0x01, 0x00}
+ if addr, err := netip.ParseAddr(host); err == nil {
+ if addr = addr.Unmap(); addr.Is4() { // send IPv4-mapped IPv6 as ATYP 0x01
+ b = append(b, 0x01)
+ b = append(b, addr.AsSlice()...)
+ } else {
+ b = append(b, 0x04)
+ b = append(b, addr.AsSlice()...)
+ }
+ } else {
+ hostBytes := []byte(host)
+ if len(hostBytes) > 255 {
+ return nil, errors.New("host_too_long")
+ }
+ b = append(b, 0x03, byte(len(hostBytes)))
+ b = append(b, hostBytes...)
+ }
+ b = append(b, byte((port>>8)&0xff), byte(port&0xff))
+ return b, nil
+}
+
+func discardSocksAddress(r io.Reader, atyp int) error {
+ var addressLength int
+ switch atyp {
+ case 0x01:
+ addressLength = 4
+ case 0x03:
+ var l [1]byte
+ if _, err := io.ReadFull(r, l[:]); err != nil {
+ return err
+ }
+ addressLength = int(l[0])
+ case 0x04:
+ addressLength = 16
+ default:
+ return fmt.Errorf("socks_unknown_atyp_%d", atyp)
+ }
+ _, err := io.CopyN(io.Discard, r, int64(addressLength+2))
+ return err
+}
+
+type tsocksTCPHalfCloser interface {
+ CloseRead() error
+ CloseWrite() error
+}
+
+type relayCloseKind string
+
+const (
+ relayCloseFIN relayCloseKind = "fin"
+ relayCloseRST relayCloseKind = "rst"
+ relayCloseTimeout relayCloseKind = "timeout"
+ relayCloseOther relayCloseKind = "other"
+)
+
+type relayResult struct {
+ direction string
+ n int64
+ err error
+ kind relayCloseKind
+ completedAt time.Time
+}
+
+func relayTCP(client, backend net.Conn) (int64, int64, string) {
+ results := make(chan relayResult, 2)
+ var clientHalf tsocksTCPHalfCloser
+ if hc, ok := client.(tsocksTCPHalfCloser); ok {
+ clientHalf = hc
+ }
+ var backendHalf tsocksTCPHalfCloser
+ if hc, ok := backend.(tsocksTCPHalfCloser); ok {
+ backendHalf = hc
+ }
+ go func() {
+ n, err := io.Copy(backend, client)
+ results <- relayResult{direction: "client", n: n, err: normalizeRelayErr(err), kind: classifyRelayErr(err), completedAt: time.Now()}
+ if backendHalf != nil {
+ _ = backendHalf.CloseWrite()
+ }
+ if clientHalf != nil {
+ _ = clientHalf.CloseRead()
+ }
+ }()
+ go func() {
+ n, err := io.Copy(client, backend)
+ results <- relayResult{direction: "server", n: n, err: normalizeRelayErr(err), kind: classifyRelayErr(err), completedAt: time.Now()}
+ if clientHalf != nil {
+ _ = clientHalf.CloseWrite()
+ }
+ if backendHalf != nil {
+ _ = backendHalf.CloseRead()
+ }
+ }()
+ var bytesUp, bytesDown int64
+ collected := make([]relayResult, 0, 2)
+ for i := 0; i < 2; i++ {
+ result := <-results
+ if result.direction == "client" {
+ bytesUp = result.n
+ } else {
+ bytesDown = result.n
+ }
+ collected = append(collected, result)
+ }
+ return bytesUp, bytesDown, tsocksCloseReason(collected)
+}
+
+func normalizeRelayErr(err error) error {
+ if err == nil || errors.Is(err, io.EOF) {
+ return nil
+ }
+ if ne, ok := err.(net.Error); ok && ne.Timeout() {
+ return nil
+ }
+ return err
+}
+
+func classifyRelayErr(err error) relayCloseKind {
+ if err == nil || errors.Is(err, io.EOF) {
+ return relayCloseFIN
+ }
+ if ne, ok := err.(net.Error); ok && ne.Timeout() {
+ return relayCloseTimeout
+ }
+ errText := strings.ToLower(err.Error())
+ if strings.Contains(errText, "reset by peer") || strings.Contains(errText, "connection reset") || strings.Contains(errText, "broken pipe") {
+ return relayCloseRST
+ }
+ return relayCloseOther
+}
+
+func readUntil(r io.Reader, marker string) ([]byte, error) {
+ buf := make([]byte, 0, 8*1024)
+ chunk := make([]byte, 1024)
+ for len(buf) < 8*1024 {
+ n, err := r.Read(chunk)
+ if n > 0 {
+ buf = append(buf, chunk[:n]...)
+ if strings.Contains(string(buf), marker) {
+ return buf, nil
+ }
+ }
+ if err != nil {
+ if errors.Is(err, io.EOF) && len(buf) > 0 {
+ return buf, nil
+ }
+ return nil, err
+ }
+ }
+ return buf, nil
+}
+
+func (c *tsocksController) log(tag, line string) {
+ if c.appCtx != nil {
+ c.appCtx.Log(tag, line)
+ }
+}
+
+func sanitizeForLog(value string) string {
+ value = strings.TrimSpace(value)
+ if value == "" {
+ return "empty"
+ }
+ var b strings.Builder
+ for _, r := range value {
+ switch {
+ case (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || strings.ContainsRune("_./:=-", r):
+ b.WriteRune(r)
+ case r == ' ' || r == '\n' || r == '\r' || r == '\t':
+ b.WriteByte('_')
+ default:
+ b.WriteByte('-')
+ }
+ }
+ return b.String()
+}
diff --git a/libtailscale/tsocks_observability.go b/libtailscale/tsocks_observability.go
new file mode 100644
index 0000000000..999233f5eb
--- /dev/null
+++ b/libtailscale/tsocks_observability.go
@@ -0,0 +1,87 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+
+package libtailscale
+
+import (
+ "fmt"
+ "net/netip"
+ "os"
+ "runtime"
+ "strings"
+ "sync/atomic"
+)
+
+type tsocksRuntimeSnapshot struct {
+ ActiveRelays int64
+ Goroutines int
+ OpenFDs int
+}
+
+func (c *tsocksController) relayStart(flowID string, src, dst netip.AddrPort, decision tsocksRouteDecision) tsocksRuntimeSnapshot {
+ snapshot := c.snapshotRuntime(atomic.AddInt64(&c.activeRelays, 1))
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=relay_start flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t activeRelays=%d goroutines=%d openFDs=%d", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, snapshot.ActiveRelays, snapshot.Goroutines, snapshot.OpenFDs))
+ return snapshot
+}
+
+func (c *tsocksController) relayEnd(flowID string, src, dst netip.AddrPort, decision tsocksRouteDecision, bytesUp, bytesDown int64, reason string) tsocksRuntimeSnapshot {
+ snapshot := c.snapshotRuntime(atomic.AddInt64(&c.activeRelays, -1))
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=relay_end flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t bytes_up=%d bytes_down=%d closeReason=%s activeRelays=%d goroutines=%d openFDs=%d", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, bytesUp, bytesDown, sanitizeForLog(reason), snapshot.ActiveRelays, snapshot.Goroutines, snapshot.OpenFDs))
+ return snapshot
+}
+
+func (c *tsocksController) logTerminatorAttach(flowID string, src, dst netip.AddrPort, decision tsocksRouteDecision, reason string) {
+ c.log(tsocksDatapathTag, fmt.Sprintf("event=terminator_attach flow_id=%s src=%s dst=%s matchedRule=%s selectedRoute=%s injectedRoute=%t reason=%s", flowID, src, dst, decision.MatchedRule, decision.Route, decision.InjectedRouteApplied, sanitizeForLog(reason)))
+}
+
+func (c *tsocksController) logSocksConnectEvent(flowID, target string, stage string, targetHost string, targetPort int, err error) {
+ line := fmt.Sprintf("event=socks_connect flow_id=%s target=%s stage=%s socksHost=%s socksPort=%d targetHost=%s targetPort=%d", flowID, target, stage, tsocksServerHost, tsocksServerPort, targetHost, targetPort)
+ if err != nil {
+ line += fmt.Sprintf(" reason=%s", sanitizeForLog(err.Error()))
+ }
+ c.log(tsocksSocksTag, line)
+}
+
+func (c *tsocksController) snapshotRuntime(activeRelays int64) tsocksRuntimeSnapshot {
+ return tsocksRuntimeSnapshot{
+ ActiveRelays: activeRelays,
+ Goroutines: runtime.NumGoroutine(),
+ OpenFDs: tsocksOpenFDCount(),
+ }
+}
+
+func tsocksOpenFDCount() int {
+ entries, err := os.ReadDir("/proc/self/fd")
+ if err != nil {
+ return -1
+ }
+ return len(entries)
+}
+
+func tsocksCloseReason(results []relayResult) string {
+ for _, result := range results {
+ if result.kind == relayCloseRST {
+ return result.direction + "_rst"
+ }
+ }
+ first := results[0]
+ if len(results) > 1 && results[1].completedAt.Before(first.completedAt) {
+ first = results[1]
+ }
+ if first.kind == relayCloseFIN {
+ return first.direction + "_fin"
+ }
+ if first.kind == relayCloseTimeout {
+ return first.direction + "_timeout"
+ }
+ var parts []string
+ for _, result := range results {
+ if result.kind == relayCloseOther && result.err != nil {
+ parts = append(parts, result.direction+"_"+sanitizeForLog(result.err.Error()))
+ }
+ }
+ if len(parts) == 0 {
+ return "eof"
+ }
+ return strings.Join(parts, ";")
+}
diff --git a/libtailscale/tsocks_rules.go b/libtailscale/tsocks_rules.go
new file mode 100644
index 0000000000..2d7d2a51ac
--- /dev/null
+++ b/libtailscale/tsocks_rules.go
@@ -0,0 +1,156 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+
+package libtailscale
+
+import (
+ "encoding/binary"
+ "fmt"
+ "hash/fnv"
+ "net/netip"
+ "sort"
+ "strings"
+)
+
+type tsocksRule struct {
+ Name string
+ Addr netip.Addr
+ Port uint16
+ AnyPort bool
+ Route tsocksRoute
+}
+
+var tsocksDatapathRules = []tsocksRule{
+ {Name: "socks_server_self", Addr: netip.MustParseAddr(tsocksServerHost), Port: tsocksServerPort, Route: tsocksRouteDirect},
+ {Name: "lan_baseline", Addr: netip.MustParseAddr(tsocksLANHost), AnyPort: true, Route: tsocksRouteDirect},
+ {Name: "tailnet_lab_baseline", Addr: netip.MustParseAddr(tsocksTailnetLabHost), AnyPort: true, Route: tsocksRouteTailscaleNormal},
+ {Name: "public_allowlist_example_com_a_80", Addr: netip.MustParseAddr("104.18.26.120"), Port: 80, Route: tsocksRouteTailnetSocks},
+ {Name: "public_allowlist_example_com_b_80", Addr: netip.MustParseAddr("104.18.27.120"), Port: 80, Route: tsocksRouteTailnetSocks},
+}
+
+func matchTSocksRule(dst netip.AddrPort) tsocksRouteDecision {
+ addr := dst.Addr().Unmap()
+ for _, rule := range tsocksDatapathRules {
+ if addr != rule.Addr {
+ continue
+ }
+ if !rule.AnyPort && dst.Port() != rule.Port {
+ continue
+ }
+ return tsocksRouteDecision{
+ Route: rule.Route,
+ MatchedRule: rule.Name,
+ InjectedRouteApplied: tsocksHasInjectedRoute(addr),
+ }
+ }
+ return tsocksRouteDecision{
+ Route: tsocksRouteDirect,
+ MatchedRule: "default_direct",
+ InjectedRouteApplied: tsocksHasInjectedRoute(addr),
+ }
+}
+
+func tsocksHasInjectedRoute(addr netip.Addr) bool {
+ addr = addr.Unmap()
+ for _, rule := range tsocksDatapathRules {
+ if rule.Route == tsocksRouteTailnetSocks && rule.Addr == addr {
+ return true
+ }
+ }
+ return false
+}
+
+func tsocksInjectedRouteTargets() []netip.Addr {
+ seen := map[netip.Addr]struct{}{}
+ var out []netip.Addr
+ for _, rule := range tsocksDatapathRules {
+ if rule.Route != tsocksRouteTailnetSocks {
+ continue
+ }
+ if _, ok := seen[rule.Addr]; ok {
+ continue
+ }
+ seen[rule.Addr] = struct{}{}
+ out = append(out, rule.Addr)
+ }
+ sort.Slice(out, func(i, j int) bool { return out[i].Less(out[j]) })
+ return out
+}
+
+func tsocksInterceptTargets() []netip.AddrPort {
+ var out []netip.AddrPort
+ for _, rule := range tsocksDatapathRules {
+ if rule.Route != tsocksRouteTailnetSocks || rule.AnyPort {
+ continue
+ }
+ out = append(out, netip.AddrPortFrom(rule.Addr, rule.Port))
+ }
+ sort.Slice(out, func(i, j int) bool {
+ if out[i].Addr() == out[j].Addr() {
+ return out[i].Port() < out[j].Port()
+ }
+ return out[i].Addr().Less(out[j].Addr())
+ })
+ return out
+}
+
+type tsocksOffloadState struct {
+ Decision string
+ Reason string
+}
+
+func tsocksCanonicalFlowEndpoints(src, dst netip.AddrPort) (netip.AddrPort, netip.AddrPort) {
+ src = netip.AddrPortFrom(src.Addr().Unmap(), src.Port())
+ dst = netip.AddrPortFrom(dst.Addr().Unmap(), dst.Port())
+ if tsocksHasInjectedRoute(src.Addr()) && !tsocksHasInjectedRoute(dst.Addr()) {
+ return dst, src
+ }
+ return src, dst
+}
+
+func tsocksFlowID(src, dst netip.AddrPort) string {
+ client, server := tsocksCanonicalFlowEndpoints(src, dst)
+ h := fnv.New64a()
+ _, _ = h.Write([]byte(client.Addr().String()))
+ var ports [4]byte
+ binary.BigEndian.PutUint16(ports[0:2], client.Port())
+ binary.BigEndian.PutUint16(ports[2:4], server.Port())
+ _, _ = h.Write(ports[:])
+ _, _ = h.Write([]byte(server.Addr().String()))
+ _, _ = h.Write([]byte("tcp"))
+ return fmt.Sprintf("%016x", h.Sum64())
+}
+
+func tsocksDecisionOffloadState(decision tsocksRouteDecision, dst netip.AddrPort) tsocksOffloadState {
+ if tsocksDecisionRecursionGuard(decision) {
+ return tsocksOffloadState{Decision: "bypass", Reason: "RECURSION_GUARD_BYPASS"}
+ }
+ if decision.Route == tsocksRouteTailnetSocks && tsocksShouldOffloadTarget(dst) {
+ return tsocksOffloadState{Decision: "offloaded", Reason: "RULE_MATCHED_AND_SOCKS_OFFLOADED"}
+ }
+ if decision.InjectedRouteApplied {
+ return tsocksOffloadState{Decision: "bypass", Reason: "RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32"}
+ }
+ return tsocksOffloadState{Decision: "bypass", Reason: "BASELINE_NATIVE_PATH_OK"}
+}
+
+func tsocksDecisionRecursionGuard(decision tsocksRouteDecision) bool {
+ return decision.MatchedRule == "socks_server_self"
+}
+
+func tsocksShouldOffloadTarget(dst netip.AddrPort) bool {
+ for _, target := range tsocksInterceptTargets() {
+ if target == dst {
+ return true
+ }
+ }
+ return false
+}
+
+func tsocksTargetsSummary(targets []netip.AddrPort) string {
+ parts := make([]string, 0, len(targets))
+ for _, target := range targets {
+ parts = append(parts, target.String())
+ }
+ return strings.Join(parts, ",")
+}
diff --git a/libtailscale/tsocks_rules_test.go b/libtailscale/tsocks_rules_test.go
new file mode 100644
index 0000000000..2cec408385
--- /dev/null
+++ b/libtailscale/tsocks_rules_test.go
@@ -0,0 +1,91 @@
+// Copyright (c) Tailscale Inc & AUTHORS
+// SPDX-License-Identifier: BSD-3-Clause
+
+package libtailscale
+
+import (
+ "net/netip"
+ "testing"
+)
+
+func TestMatchTSocksRule(t *testing.T) {
+ tests := []struct {
+ name string
+ target string
+ wantRoute tsocksRoute
+ wantRule string
+ wantInjected bool
+ }{
+ {name: "public_a_exact", target: "104.18.26.120:80", wantRoute: tsocksRouteTailnetSocks, wantRule: "public_allowlist_example_com_a_80", wantInjected: true},
+ {name: "public_b_exact", target: "104.18.27.120:80", wantRoute: tsocksRouteTailnetSocks, wantRule: "public_allowlist_example_com_b_80", wantInjected: true},
+ {name: "public_a_wrong_port", target: "104.18.26.120:81", wantRoute: tsocksRouteDirect, wantRule: "default_direct", wantInjected: true},
+ {name: "lan_wildcard", target: "192.168.31.101:19080", wantRoute: tsocksRouteDirect, wantRule: "lan_baseline", wantInjected: false},
+ {name: "tailnet_lab_wildcard", target: "100.109.193.113:443", wantRoute: tsocksRouteTailscaleNormal, wantRule: "tailnet_lab_baseline", wantInjected: false},
+ {name: "socks_self_exact", target: "100.78.63.77:1080", wantRoute: tsocksRouteDirect, wantRule: "socks_server_self", wantInjected: false},
+ {name: "public_no_match", target: "104.18.4.106:80", wantRoute: tsocksRouteDirect, wantRule: "default_direct", wantInjected: false},
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ target := netip.MustParseAddrPort(tt.target)
+ got := matchTSocksRule(target)
+ if got.Route != tt.wantRoute {
+ t.Fatalf("route = %s, want %s", got.Route, tt.wantRoute)
+ }
+ if got.MatchedRule != tt.wantRule {
+ t.Fatalf("matchedRule = %s, want %s", got.MatchedRule, tt.wantRule)
+ }
+ if got.InjectedRouteApplied != tt.wantInjected {
+ t.Fatalf("injectedRoute = %t, want %t", got.InjectedRouteApplied, tt.wantInjected)
+ }
+ })
+ }
+}
+
+func TestTSocksInjectedRouteTargets(t *testing.T) {
+ want := []netip.Addr{
+ netip.MustParseAddr("104.18.26.120"),
+ netip.MustParseAddr("104.18.27.120"),
+ }
+ got := tsocksInjectedRouteTargets()
+ if len(got) != len(want) {
+ t.Fatalf("len(routes) = %d, want %d", len(got), len(want))
+ }
+ for i := range want {
+ if got[i] != want[i] {
+ t.Fatalf("routes[%d] = %s, want %s", i, got[i], want[i])
+ }
+ }
+}
+
+func TestTSocksDecisionOffloadState(t *testing.T) {
+ tests := []struct {
+ name string
+ target string
+ wantDecision string
+ wantReason string
+ }{
+ {name: "allowlist_offloaded", target: "104.18.26.120:80", wantDecision: "offloaded", wantReason: "RULE_MATCHED_AND_SOCKS_OFFLOADED"},
+ {name: "wrong_port_bypass", target: "104.18.26.120:81", wantDecision: "bypass", wantReason: "RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32"},
+ {name: "recursion_guard_bypass", target: "100.78.63.77:1080", wantDecision: "bypass", wantReason: "RECURSION_GUARD_BYPASS"},
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ target := netip.MustParseAddrPort(tt.target)
+ got := tsocksDecisionOffloadState(matchTSocksRule(target), target)
+ if got.Decision != tt.wantDecision || got.Reason != tt.wantReason {
+ t.Fatalf("offload = %+v, want decision=%s reason=%s", got, tt.wantDecision, tt.wantReason)
+ }
+ })
+ }
+}
+
+func TestTSocksFlowIDCanonicalAcrossDirections(t *testing.T) {
+ client := netip.MustParseAddrPort("100.113.1.35:34567")
+ server := netip.MustParseAddrPort("104.18.26.120:80")
+ forward := tsocksFlowID(client, server)
+ reverse := tsocksFlowID(server, client)
+ if forward != reverse {
+ t.Fatalf("flow IDs differ: forward=%s reverse=%s", forward, reverse)
+ }
+}
diff --git a/scripts/tsocks-test-build.sh b/scripts/tsocks-test-build.sh
new file mode 100644
index 0000000000..68a4839297
--- /dev/null
+++ b/scripts/tsocks-test-build.sh
@@ -0,0 +1,11 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+
+cd "$repo_root"
+make apk
diff --git a/scripts/tsocks-test-env.sh b/scripts/tsocks-test-env.sh
new file mode 100755
index 0000000000..d0de1c9a92
--- /dev/null
+++ b/scripts/tsocks-test-env.sh
@@ -0,0 +1,23 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+
+resolve_default_lan_host() {
+ ip -4 -o addr show up scope global | awk '$2 != "tailscale0" { split($4, a, "/"); print a[1]; exit }'
+}
+
+resolve_default_tailnet_host() {
+ if command -v tailscale >/dev/null 2>&1; then
+ tailscale ip -4 2>/dev/null | awk 'NF { print; exit }'
+ fi
+}
+
+export TSOCKS_TEST_LAN_HOST="${TSOCKS_TEST_LAN_HOST:-$(resolve_default_lan_host)}"
+export TSOCKS_TEST_TAILNET_HOST="${TSOCKS_TEST_TAILNET_HOST:-$(resolve_default_tailnet_host)}"
+
+export TSOCKS_TEST_LAN_HTTP_PORT="${TSOCKS_TEST_LAN_HTTP_PORT:-18080}"
+export TSOCKS_TEST_LAN_TCP_PORT="${TSOCKS_TEST_LAN_TCP_PORT:-19080}"
+export TSOCKS_TEST_TAILNET_HTTP_PORT="${TSOCKS_TEST_TAILNET_HTTP_PORT:-18081}"
+export TSOCKS_TEST_TAILNET_TCP_PORT="${TSOCKS_TEST_TAILNET_TCP_PORT:-19081}"
diff --git a/scripts/tsocks-test-install.sh b/scripts/tsocks-test-install.sh
new file mode 100644
index 0000000000..aaea794032
--- /dev/null
+++ b/scripts/tsocks-test-install.sh
@@ -0,0 +1,11 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+
+cd "$repo_root"
+make install
diff --git a/scripts/tsocks-test-logs.sh b/scripts/tsocks-test-logs.sh
new file mode 100644
index 0000000000..0d9a605ee9
--- /dev/null
+++ b/scripts/tsocks-test-logs.sh
@@ -0,0 +1,14 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+adb_bin=${ADB:-adb}
+
+if [ -n "${SERIAL:-}" ]; then
+ "$adb_bin" -s "$SERIAL" logcat -d -s TSOCKS_TEST TSOCKS_ROUTE TSOCKS_SOCKS TSOCKS_DATAPATH
+else
+ "$adb_bin" logcat -d -s TSOCKS_TEST TSOCKS_ROUTE TSOCKS_SOCKS TSOCKS_DATAPATH
+fi
diff --git a/scripts/tsocks-test-pass-fail.sh b/scripts/tsocks-test-pass-fail.sh
new file mode 100644
index 0000000000..950c0364f7
--- /dev/null
+++ b/scripts/tsocks-test-pass-fail.sh
@@ -0,0 +1,76 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+adb_bin=${ADB:-adb}
+
+run_adb() {
+ if [ -n "${SERIAL:-}" ]; then
+ "$adb_bin" -s "$SERIAL" "$@"
+ else
+ "$adb_bin" "$@"
+ fi
+}
+
+tmp_file=$(mktemp)
+trap 'rm -f "$tmp_file"' EXIT INT TERM
+
+cd "$repo_root"
+run_adb logcat -d -s TSOCKS_TEST TSOCKS_ROUTE TSOCKS_SOCKS TSOCKS_DATAPATH > "$tmp_file"
+
+has_fail=0
+for scenario in lan-http tailnet-http lan-tcp tailnet-tcp public-http phase3-public-http-a phase3-public-http-b phase3-public-no-match phase3-wrong-port-entered-tun phase3-recursion-guard; do
+ if grep -q "event=TEST_PASS .*scenario=$scenario" "$tmp_file"; then
+ printf 'PASS %s\n' "$scenario"
+ else
+ printf 'FAIL %s\n' "$scenario"
+ has_fail=1
+ fi
+done
+
+check_line() {
+ label=$1
+ pattern=$2
+ if grep -Eq "$pattern" "$tmp_file"; then
+ printf 'PASS %s\n' "$label"
+ else
+ printf 'FAIL %s\n' "$label"
+ has_fail=1
+ fi
+}
+
+for target in 104.18.26.120:80 104.18.27.120:80; do
+ check_line "phase3-flow-$target" "TSOCKS_DATAPATH: event=flow_identified .*dst=$target .*selectedRoute=TAILNET_SOCKS .*injectedRoute=true"
+ check_line "phase3-socks-$target" "TSOCKS_SOCKS: event=socks_connect_success flow=datapath .*target=$target"
+ check_line "phase3-target-$target" "TSOCKS_DATAPATH: event=target_connect_success .*dst=$target .*selectedRoute=TAILNET_SOCKS .*injectedRoute=true"
+ check_line "phase3-bytes-$target" "TSOCKS_DATAPATH: event=conn_close .*dst=$target .*selectedRoute=TAILNET_SOCKS .*bytes_up=[1-9][0-9]* .*bytes_down=[1-9][0-9]* .*closeReason="
+done
+
+check_line "RULE_MATCHED_AND_SOCKS_OFFLOADED" "TSOCKS_DATAPATH: event=flow_identified .*offloadDecision=offloaded .*offloadReason=RULE_MATCHED_AND_SOCKS_OFFLOADED .*recursionGuard=false"
+
+check_line "phase3-public-no-match-route" "TSOCKS_ROUTE: event=route_decision .*target=104.18.4.106:80 .*matchedRule=default_direct .*selectedRoute=DIRECT .*injectedRoute=false"
+check_line "phase3-public-no-match-pass" "event=TEST_PASS .*scenario=phase3-public-no-match .*route=DIRECT"
+if grep -Eq 'TSOCKS_SOCKS: event=socks_connect_success .*target=104.18.4.106:80' "$tmp_file"; then
+ printf 'FAIL phase3-public-no-match-socks-leak\n'
+ has_fail=1
+else
+ printf 'PASS phase3-public-no-match-socks-leak\n'
+fi
+baseline_pattern=$(printf '%s:%s|%s:%s|%s:%s|%s:%s' \
+ "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_LAN_HTTP_PORT" \
+ "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_LAN_TCP_PORT" \
+ "$TSOCKS_TEST_TAILNET_HOST" "$TSOCKS_TEST_TAILNET_HTTP_PORT" \
+ "$TSOCKS_TEST_TAILNET_HOST" "$TSOCKS_TEST_TAILNET_TCP_PORT")
+check_line "BASELINE_NATIVE_PATH_OK" "TSOCKS_ROUTE: event=route_decision .*target=($baseline_pattern) .*offloadDecision=bypass .*offloadReason=BASELINE_NATIVE_PATH_OK"
+check_line "RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32" "TSOCKS_DATAPATH: event=route_decision .*dst=104.18.26.120:81 .*matchedRule=default_direct .*selectedRoute=DIRECT .*injectedRoute=true .*entered_tun_due_to_/32=true .*offloadDecision=bypass .*offloadReason=RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32 .*expectedBehavior=true .*recursionGuard=false"
+
+check_line "phase3-recursion-guard-route" "TSOCKS_ROUTE: event=route_decision .*target=100.78.63.77:1080 .*matchedRule=socks_server_self .*selectedRoute=DIRECT .*injectedRoute=false .*offloadDecision=bypass .*offloadReason=RECURSION_GUARD_BYPASS .*recursionGuard=true"
+check_line "phase3-recursion-guard-pass" "event=TEST_PASS .*scenario=phase3-recursion-guard .*detail=preview_only"
+check_line "RECURSION_GUARD_BYPASS" "TSOCKS_ROUTE: event=route_decision .*target=100.78.63.77:1080 .*offloadDecision=bypass .*offloadReason=RECURSION_GUARD_BYPASS .*recursionGuard=true"
+
+exit "$has_fail"
diff --git a/scripts/tsocks-test-phase32.sh b/scripts/tsocks-test-phase32.sh
new file mode 100755
index 0000000000..3bda1a9cd1
--- /dev/null
+++ b/scripts/tsocks-test-phase32.sh
@@ -0,0 +1,232 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+adb_bin=${ADB:-adb}
+concurrency=${CONCURRENCY:-10}
+sleep_seconds=${SLEEP_SECONDS:-2}
+build_first=${BUILD_FIRST:-true}
+install_first=${INSTALL_FIRST:-true}
+
+run_adb() {
+ if [ -n "${SERIAL:-}" ]; then
+ "$adb_bin" -s "$SERIAL" "$@"
+ else
+ "$adb_bin" "$@"
+ fi
+}
+
+wait_for_device_http() {
+ name=$1
+ url=$2
+ attempts=${3:-10}
+ count=1
+ while [ "$count" -le "$attempts" ]; do
+ if run_adb shell "curl --max-time 3 -fsS '$url' >/dev/null" >/dev/null 2>&1; then
+ printf 'READY %s\n' "$name"
+ return 0
+ fi
+ sleep 1
+ count=$((count + 1))
+ done
+ printf 'NOT_READY %s\n' "$name" >&2
+ return 1
+}
+
+wait_for_device_tcp() {
+ name=$1
+ host=$2
+ port=$3
+ attempts=${4:-10}
+ count=1
+ while [ "$count" -le "$attempts" ]; do
+ if run_adb shell "printf 'PING\\n' | nc -w 3 '$host' '$port' >/dev/null" >/dev/null 2>&1; then
+ printf 'READY %s\n' "$name"
+ return 0
+ fi
+ sleep 1
+ count=$((count + 1))
+ done
+ printf 'NOT_READY %s\n' "$name" >&2
+ return 1
+}
+
+tmp_dir=$(mktemp -d)
+trap 'rm -rf "$tmp_dir"' EXIT INT TERM
+
+assert_contains() {
+ label=$1
+ pattern=$2
+ file=$3
+ if grep -Eq "$pattern" "$file"; then
+ printf 'PASS %s\n' "$label"
+ else
+ printf 'FAIL %s\n' "$label"
+ return 1
+ fi
+}
+
+assert_not_contains() {
+ label=$1
+ pattern=$2
+ file=$3
+ if grep -Eq "$pattern" "$file"; then
+ printf 'FAIL %s\n' "$label"
+ return 1
+ fi
+ printf 'PASS %s\n' "$label"
+}
+
+collect_logs() {
+ out=$1
+ run_adb logcat -d -s TSOCKS_TEST TSOCKS_ROUTE TSOCKS_SOCKS TSOCKS_DATAPATH >"$out"
+}
+
+prepare_device() {
+ run_adb shell am start -n com.tailscale.ipn/com.tailscale.ipn.MainActivity >/dev/null
+ run_adb shell am broadcast \
+ -n com.tailscale.ipn/com.tailscale.ipn.IPNReceiver \
+ -a com.tailscale.ipn.CONNECT_VPN >/dev/null
+ sleep "$sleep_seconds"
+ wait_for_device_http lan-http "http://$TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_HTTP_PORT/healthz"
+ wait_for_device_http tailnet-http "http://$TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz"
+ wait_for_device_tcp lan-tcp "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_LAN_TCP_PORT"
+ wait_for_device_tcp tailnet-tcp "$TSOCKS_TEST_TAILNET_HOST" "$TSOCKS_TEST_TAILNET_TCP_PORT"
+}
+
+run_baseline() {
+ printf '== baseline ==\n'
+ sh "$repo_root/scripts/tsocks-test-services-start.sh" >/dev/null
+ run_adb logcat -c
+ for scenario in lan-http lan-tcp tailnet-http; do
+ REQUEST_ID="phase32-$scenario-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" "$scenario"
+ sleep "$sleep_seconds"
+ done
+ log_file="$tmp_dir/baseline.log"
+ collect_logs "$log_file"
+ assert_contains "baseline-lan-http" "event=TEST_PASS .*scenario=lan-http" "$log_file"
+ assert_contains "baseline-lan-tcp" "event=TEST_PASS .*scenario=lan-tcp" "$log_file"
+ assert_contains "baseline-tailnet-http" "event=TEST_PASS .*scenario=tailnet-http" "$log_file"
+ assert_contains "baseline-native-path" "event=route_decision .*offloadReason=BASELINE_NATIVE_PATH_OK" "$log_file"
+}
+
+run_concurrent_socks() {
+ printf '== concurrent-socks ==\n'
+ run_adb logcat -c
+ i=1
+ while [ "$i" -le "$concurrency" ]; do
+ scenario=phase3-public-http-a
+ if [ $((i % 2)) -eq 0 ]; then
+ scenario=phase3-public-http-b
+ fi
+ REQUEST_ID="phase32-socks-$i-$(date +%s)" SERIAL="${SERIAL:-}" TIMEOUT_MS=8000 sh "$repo_root/scripts/tsocks-test-trigger.sh" "$scenario" &
+ i=$((i + 1))
+ done
+ wait
+ sleep 8
+ log_file="$tmp_dir/concurrent-socks.log"
+ collect_logs "$log_file"
+ assert_contains "socks-pass-seen" "event=TEST_PASS .*scenario=phase3-public-http-[ab]" "$log_file"
+ assert_contains "socks-flow-identified" "event=flow_identified .*offloadReason=RULE_MATCHED_AND_SOCKS_OFFLOADED" "$log_file"
+ assert_contains "socks-relay-start" "event=relay_start .*activeRelays=" "$log_file"
+ assert_contains "socks-relay-end" "event=relay_end .*activeRelays=0" "$log_file"
+ assert_contains "socks-close" "event=conn_close .*closeReason=" "$log_file"
+ assert_contains "socks-connect" "event=socks_connect .*stage=connect_success" "$log_file"
+ assert_not_contains "socks-test-fail" "event=TEST_FAIL" "$log_file"
+ # Cross-target interleaving cannot be detected with line-based grep (patterns never span lines); the relay_end/activeRelays=0 check above covers flow isolation.
+}
+
+run_concurrent_direct() {
+ printf '== concurrent-direct ==\n'
+ run_adb logcat -c
+ i=1
+ while [ "$i" -le "$concurrency" ]; do
+ REQUEST_ID="phase32-direct-$i-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" phase3-public-no-match &
+ i=$((i + 1))
+ done
+ wait
+ sleep 5
+ log_file="$tmp_dir/concurrent-direct.log"
+ collect_logs "$log_file"
+ assert_contains "direct-pass" "event=TEST_PASS .*scenario=phase3-public-no-match .*route=DIRECT" "$log_file"
+ assert_contains "direct-route" "event=route_decision .*target=104.18.4.106:80 .*selectedRoute=DIRECT .*offloadReason=BASELINE_NATIVE_PATH_OK" "$log_file"
+ assert_not_contains "direct-socks-leak" "TSOCKS_SOCKS: .*104\.18\.4\.106:80" "$log_file"
+ assert_not_contains "direct-test-fail" "event=TEST_FAIL" "$log_file"
+}
+
+run_concurrent_mixed() {
+ printf '== concurrent-mixed ==\n'
+ run_adb logcat -c
+ i=1
+ while [ "$i" -le "$concurrency" ]; do
+ REQUEST_ID="phase32-mixed-socks-$i-$(date +%s)" SERIAL="${SERIAL:-}" TIMEOUT_MS=8000 sh "$repo_root/scripts/tsocks-test-trigger.sh" phase3-public-http-a &
+ REQUEST_ID="phase32-mixed-direct-$i-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" phase3-public-no-match &
+ i=$((i + 1))
+ done
+ wait
+ sleep 8
+ log_file="$tmp_dir/concurrent-mixed.log"
+ collect_logs "$log_file"
+ assert_contains "mixed-socks-pass" "event=TEST_PASS .*scenario=phase3-public-http-a .*route=TAILNET_SOCKS" "$log_file"
+ assert_contains "mixed-direct-pass" "event=TEST_PASS .*scenario=phase3-public-no-match .*route=DIRECT" "$log_file"
+ assert_contains "mixed-relay-end" "event=relay_end .*activeRelays=0" "$log_file"
+ assert_not_contains "mixed-direct-socks-leak" "TSOCKS_SOCKS: .*104\.18\.4\.106:80" "$log_file"
+ assert_not_contains "mixed-test-fail" "event=TEST_FAIL" "$log_file"
+}
+
+run_wrong_port() {
+ printf '== wrong-port ==\n'
+ run_adb logcat -c
+ REQUEST_ID="phase32-wrong-port-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" phase3-wrong-port-entered-tun
+ sleep 5
+ log_file="$tmp_dir/wrong-port.log"
+ collect_logs "$log_file"
+ assert_contains "wrong-port-trigger" "event=TEST_PASS .*scenario=phase3-wrong-port-entered-tun" "$log_file"
+ assert_contains "wrong-port-expected" "event=route_decision .*dst=104.18.26.120:81 .*selectedRoute=DIRECT .*entered_tun_due_to_/32=true .*offloadDecision=bypass .*offloadReason=RULE_NOT_MATCHED_BUT_ENTERED_TUN_DUE_TO_/32 .*expectedBehavior=true" "$log_file"
+}
+
+run_lifecycle() {
+ printf '== lifecycle ==\n'
+ run_adb logcat -c
+ REQUEST_ID="phase32-normal-close-$(date +%s)" SERIAL="${SERIAL:-}" TIMEOUT_MS=8000 sh "$repo_root/scripts/tsocks-test-trigger.sh" phase3-public-http-a
+ sleep 4
+ run_adb shell "sh -c \"{ printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n'; sleep 10; } | nc 104.18.26.120 80 >/dev/null 2>&1 & pid=\\\$!; sleep 1; kill -9 \\\$pid; log -t TSOCKS_TEST 'event=TEST_PASS requestId=phase32-client-kill scenario=phase32-client-kill route=TAILNET_SOCKS detail=client_killed'; exit 0\""
+ sleep 5
+ REQUEST_ID="phase32-tailnet-close-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" tailnet-tcp-close
+ REQUEST_ID="phase32-tailnet-rst-$(date +%s)" SERIAL="${SERIAL:-}" sh "$repo_root/scripts/tsocks-test-trigger.sh" tailnet-tcp-rst || true
+ sleep 3
+ log_file="$tmp_dir/lifecycle.log"
+ collect_logs "$log_file"
+ assert_contains "lifecycle-syn" "event=syn_received .*flow_id=" "$log_file"
+ assert_contains "lifecycle-synack" "event=synack_sent .*flow_id=" "$log_file"
+ assert_contains "lifecycle-ack" "event=ack_seen .*flow_id=" "$log_file"
+ assert_contains "lifecycle-fin" "event=fin_seen|event=finack_seen" "$log_file"
+ assert_contains "lifecycle-rst" "event=rst_seen .*flow_id=" "$log_file"
+ assert_contains "lifecycle-client-kill" "event=TEST_PASS .*scenario=phase32-client-kill" "$log_file"
+ assert_contains "lifecycle-tailnet-close" "event=TEST_PASS .*scenario=tailnet-tcp-close" "$log_file"
+ assert_contains "lifecycle-close-reason" "event=conn_close .*closeReason=(client_fin|server_fin|client_rst|server_rst|eof)" "$log_file"
+}
+
+cd "$repo_root"
+
+if [ "$build_first" = "true" ]; then
+ sh scripts/tsocks-test-build.sh
+fi
+if [ "$install_first" = "true" ]; then
+ sh scripts/tsocks-test-install.sh
+fi
+
+prepare_device
+run_baseline
+run_concurrent_socks
+run_concurrent_direct
+run_concurrent_mixed
+run_wrong_port
+run_lifecycle
+
+printf 'PHASE32_PASS\n'
diff --git a/scripts/tsocks-test-run-all.sh b/scripts/tsocks-test-run-all.sh
new file mode 100644
index 0000000000..7d57b3f5df
--- /dev/null
+++ b/scripts/tsocks-test-run-all.sh
@@ -0,0 +1,97 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+adb_bin=${ADB:-adb}
+sleep_seconds=${SLEEP_SECONDS:-2}
+build_first=${BUILD_FIRST:-true}
+install_first=${INSTALL_FIRST:-true}
+connect_vpn_first=${CONNECT_VPN_FIRST:-true}
+start_services_first=${START_TEST_SERVICES_FIRST:-true}
+
+run_adb() {
+ if [ -n "${SERIAL:-}" ]; then
+ "$adb_bin" -s "$SERIAL" "$@"
+ else
+ "$adb_bin" "$@"
+ fi
+}
+
+wait_for_http() {
+ scenario=$1
+ url=$2
+ attempts=${3:-10}
+ count=1
+ while [ "$count" -le "$attempts" ]; do
+ if run_adb shell "curl --max-time 3 -fsS '$url' >/dev/null" >/dev/null 2>&1; then
+ printf 'READY %s\n' "$scenario"
+ return 0
+ fi
+ sleep 1
+ count=$((count + 1))
+ done
+ printf 'ENV_NOT_READY %s\n' "$scenario" >&2
+ return 1
+}
+
+wait_for_tcp() {
+ scenario=$1
+ host=$2
+ port=$3
+ attempts=${4:-10}
+ count=1
+ while [ "$count" -le "$attempts" ]; do
+ if run_adb shell "printf 'PING\\n' | nc -w 3 '$host' '$port' >/dev/null" >/dev/null 2>&1; then
+ printf 'READY %s\n' "$scenario"
+ return 0
+ fi
+ sleep 1
+ count=$((count + 1))
+ done
+ printf 'ENV_NOT_READY %s\n' "$scenario" >&2
+ return 1
+}
+
+cd "$repo_root"
+
+if [ "$start_services_first" = "true" ]; then
+ sh scripts/tsocks-test-services-start.sh
+fi
+
+if [ "$build_first" = "true" ]; then
+ sh scripts/tsocks-test-build.sh
+fi
+
+if [ "$install_first" = "true" ]; then
+ sh scripts/tsocks-test-install.sh
+fi
+
+if [ "$connect_vpn_first" = "true" ]; then
+ run_adb shell am broadcast \
+ -n com.tailscale.ipn/com.tailscale.ipn.IPNReceiver \
+ -a com.tailscale.ipn.CONNECT_VPN
+ sleep "$sleep_seconds"
+fi
+
+wait_for_http lan-http "http://$TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_HTTP_PORT/healthz"
+wait_for_http tailnet-http "http://$TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz"
+wait_for_tcp lan-tcp "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_LAN_TCP_PORT"
+wait_for_tcp tailnet-tcp "$TSOCKS_TEST_TAILNET_HOST" "$TSOCKS_TEST_TAILNET_TCP_PORT"
+
+run_adb logcat -c
+
+for scenario in lan-http tailnet-http lan-tcp tailnet-tcp public-http phase3-public-http-a phase3-public-http-b phase3-public-no-match phase3-wrong-port-entered-tun phase3-recursion-guard; do
+ REQUEST_ID="$(date +%Y%m%d%H%M%S)-$scenario" SERIAL="${SERIAL:-}" sh scripts/tsocks-test-trigger.sh "$scenario"
+ sleep "$sleep_seconds"
+done
+
+echo "=== TSOCKS route/test logs ==="
+SERIAL="${SERIAL:-}" sh scripts/tsocks-test-logs.sh
+
+echo "=== PASS/FAIL summary ==="
+SERIAL="${SERIAL:-}" sh scripts/tsocks-test-pass-fail.sh
diff --git a/scripts/tsocks-test-services-health.sh b/scripts/tsocks-test-services-health.sh
new file mode 100755
index 0000000000..3eacc513eb
--- /dev/null
+++ b/scripts/tsocks-test-services-health.sh
@@ -0,0 +1,37 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+
+check_http() {
+ name=$1
+ url=$2
+ if curl -fsS --max-time 2 "$url" >/dev/null; then
+ printf 'READY %s\n' "$name"
+ else
+ printf 'NOT_READY %s\n' "$name" >&2
+ return 1
+ fi
+}
+
+check_tcp() {
+ name=$1
+ host=$2
+ port=$3
+ if printf 'PING\n' | nc -w 2 "$host" "$port" | grep -q 'PONG'; then
+ printf 'READY %s\n' "$name"
+ else
+ printf 'NOT_READY %s\n' "$name" >&2
+ return 1
+ fi
+}
+
+check_http lan-http "http://$TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_HTTP_PORT/healthz"
+check_http tailnet-http "http://$TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz"
+check_tcp lan-tcp "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_LAN_TCP_PORT"
+check_tcp tailnet-tcp "$TSOCKS_TEST_TAILNET_HOST" "$TSOCKS_TEST_TAILNET_TCP_PORT"
diff --git a/scripts/tsocks-test-services-start.sh b/scripts/tsocks-test-services-start.sh
new file mode 100755
index 0000000000..c51e39297f
--- /dev/null
+++ b/scripts/tsocks-test-services-start.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+
+pid_file="$repo_root/.tsocks-test-services.pid"
+log_file="$repo_root/.tsocks-test-services.log"
+
+if [ -f "$pid_file" ] && kill -0 "$(cat "$pid_file")" 2>/dev/null; then
+ printf 'TSOCKS_TEST_SERVICES already_running pid=%s\n' "$(cat "$pid_file")"
+ exit 0
+fi
+
+if [ -z "$TSOCKS_TEST_LAN_HOST" ] || [ -z "$TSOCKS_TEST_TAILNET_HOST" ]; then
+ printf 'missing_test_hosts lan=%s tailnet=%s\n' "$TSOCKS_TEST_LAN_HOST" "$TSOCKS_TEST_TAILNET_HOST" >&2
+ exit 1
+fi
+
+cd "$repo_root"
+setsid python3 scripts/tsocks_test_server.py \
+ --lan-host "$TSOCKS_TEST_LAN_HOST" \
+ --tailnet-host "$TSOCKS_TEST_TAILNET_HOST" \
+ --lan-http-port "$TSOCKS_TEST_LAN_HTTP_PORT" \
+ --lan-tcp-port "$TSOCKS_TEST_LAN_TCP_PORT" \
+ --tailnet-http-port "$TSOCKS_TEST_TAILNET_HTTP_PORT" \
+ --tailnet-tcp-port "$TSOCKS_TEST_TAILNET_TCP_PORT" \
+ < /dev/null >"$log_file" 2>&1 &
+echo $! >"$pid_file"
+sleep 1
+sh scripts/tsocks-test-services-health.sh
diff --git a/scripts/tsocks-test-services-stop.sh b/scripts/tsocks-test-services-stop.sh
new file mode 100755
index 0000000000..eabb5302ea
--- /dev/null
+++ b/scripts/tsocks-test-services-stop.sh
@@ -0,0 +1,19 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+pid_file="$repo_root/.tsocks-test-services.pid"
+
+if [ ! -f "$pid_file" ]; then
+ exit 0
+fi
+
+pid=$(cat "$pid_file")
+if kill -0 "$pid" 2>/dev/null; then
+ kill "$pid"
+fi
+rm -f "$pid_file"
diff --git a/scripts/tsocks-test-trigger.sh b/scripts/tsocks-test-trigger.sh
new file mode 100644
index 0000000000..8b36d01b63
--- /dev/null
+++ b/scripts/tsocks-test-trigger.sh
@@ -0,0 +1,191 @@
+#!/bin/sh
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+set -eu
+
+usage() {
+ cat <<'EOF'
+Usage: scripts/tsocks-test-trigger.sh <scenario>
+
+Scenarios:
+ lan-http -> $TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_HTTP_PORT/healthz
+ tailnet-http -> $TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz
+ lan-tcp -> $TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_TCP_PORT
+ tailnet-tcp -> $TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_TCP_PORT
+ lan-tcp-close -> $TSOCKS_TEST_LAN_HOST:$TSOCKS_TEST_LAN_TCP_PORT payload CLOSE
+ tailnet-tcp-close -> $TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_TCP_PORT payload CLOSE
+ tailnet-tcp-rst -> $TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_TCP_PORT payload RST
+ public-http -> example.com:80/
+ datapath-public-http -> Activity GET http://example.com/
+ datapath-direct-http -> Activity GET http://$TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz
+ phase3-public-http-a -> shell curl http://104.18.26.120/ with Host: example.com
+ phase3-public-http-b -> shell curl http://104.18.27.120/ with Host: example.com
+ phase3-public-no-match -> direct probe http://104.18.4.106/ with Host: example.net
+ phase3-wrong-port-entered-tun -> shell curl http://104.18.26.120:81/ to observe /32 boundary
+ phase3-recursion-guard -> preview-only probe for 100.78.63.77:1080
+
+Optional env:
+  SERIAL=             adb device serial (default: the only connected device)
+  TIMEOUT_MS=         per-request timeout in milliseconds (default 5000)
+  REQUEST_ID=         log correlation id (default: timestamp-scenario)
+  SOCKS_ENABLED=true|false (default true)
+EOF
+}
+
+scenario=${1-}
+if [ -z "$scenario" ]; then
+ usage >&2
+ exit 1
+fi
+
+repo_root=$(CDPATH= cd -- "$(dirname -- "$0")/.." && pwd)
+. "$repo_root/scripts/tsocks-test-env.sh"
+adb_bin=${ADB:-adb}
+timeout_ms=${TIMEOUT_MS:-5000}
+request_id=${REQUEST_ID:-$(date +%Y%m%d%H%M%S)-$scenario}
+socks_enabled=${SOCKS_ENABLED:-true}
+
+run_adb() {
+ if [ -n "${SERIAL:-}" ]; then
+ "$adb_bin" -s "$SERIAL" "$@"
+ else
+ "$adb_bin" "$@"
+ fi
+}
+
+host=
+port=
+protocol=
+path=
+payload=
+url=
+
+case "$scenario" in
+ lan-http)
+ host=$TSOCKS_TEST_LAN_HOST
+ port=$TSOCKS_TEST_LAN_HTTP_PORT
+ protocol=http
+ path=/healthz
+ ;;
+ tailnet-http)
+ host=$TSOCKS_TEST_TAILNET_HOST
+ port=$TSOCKS_TEST_TAILNET_HTTP_PORT
+ protocol=http
+ path=/healthz
+ ;;
+ lan-tcp)
+ host=$TSOCKS_TEST_LAN_HOST
+ port=$TSOCKS_TEST_LAN_TCP_PORT
+ protocol=tcp
+ payload="PING"
+ ;;
+ lan-tcp-close)
+ host=$TSOCKS_TEST_LAN_HOST
+ port=$TSOCKS_TEST_LAN_TCP_PORT
+ protocol=tcp
+ payload="CLOSE"
+ ;;
+ tailnet-tcp)
+ host=$TSOCKS_TEST_TAILNET_HOST
+ port=$TSOCKS_TEST_TAILNET_TCP_PORT
+ protocol=tcp
+ payload="PING"
+ ;;
+ tailnet-tcp-close)
+ host=$TSOCKS_TEST_TAILNET_HOST
+ port=$TSOCKS_TEST_TAILNET_TCP_PORT
+ protocol=tcp
+ payload="CLOSE"
+ ;;
+ tailnet-tcp-rst)
+ host=$TSOCKS_TEST_TAILNET_HOST
+ port=$TSOCKS_TEST_TAILNET_TCP_PORT
+ protocol=tcp
+ payload="RST"
+ ;;
+ public-http)
+ host=example.com
+ port=80
+ protocol=http
+ path=/
+ ;;
+ phase3-public-http-a)
+    run_adb shell "curl --max-time $(( (timeout_ms + 999) / 1000 )) -H 'Host: example.com' http://104.18.26.120/ >/dev/null 2>&1; rc=\$?; if [ \$rc -eq 0 ]; then log -t TSOCKS_TEST 'event=TEST_PASS requestId=${request_id} scenario=phase3-public-http-a route=TAILNET_SOCKS detail=curl_exit_0'; else log -t TSOCKS_TEST 'event=TEST_FAIL requestId=${request_id} scenario=phase3-public-http-a route=TAILNET_SOCKS reason=curl_exit_'\$rc; fi; exit \$rc"
+ exit 0
+ ;;
+ phase3-public-http-b)
+    run_adb shell "curl --max-time $(( (timeout_ms + 999) / 1000 )) -H 'Host: example.com' http://104.18.27.120/ >/dev/null 2>&1; rc=\$?; if [ \$rc -eq 0 ]; then log -t TSOCKS_TEST 'event=TEST_PASS requestId=${request_id} scenario=phase3-public-http-b route=TAILNET_SOCKS detail=curl_exit_0'; else log -t TSOCKS_TEST 'event=TEST_FAIL requestId=${request_id} scenario=phase3-public-http-b route=TAILNET_SOCKS reason=curl_exit_'\$rc; fi; exit \$rc"
+ exit 0
+ ;;
+ phase3-public-no-match)
+ host=104.18.4.106
+ port=80
+ protocol=http
+ path=/
+ payload=
+ host_header=example.net
+ ;;
+ phase3-wrong-port-entered-tun)
+    run_adb shell "curl --max-time $(( (timeout_ms + 999) / 1000 )) http://104.18.26.120:81/ >/dev/null 2>&1; log -t TSOCKS_TEST 'event=TEST_PASS requestId=${request_id} scenario=phase3-wrong-port-entered-tun route=DIRECT detail=trigger_sent'; exit 0"
+ exit 0
+ ;;
+ phase3-recursion-guard)
+ host=100.78.63.77
+ port=1080
+ protocol=tcp
+ preview_only=true
+ ;;
+ datapath-public-http)
+ url=http://example.com/
+ ;;
+ datapath-direct-http)
+ url=http://$TSOCKS_TEST_TAILNET_HOST:$TSOCKS_TEST_TAILNET_HTTP_PORT/healthz
+ ;;
+ *)
+ usage >&2
+ exit 1
+ ;;
+esac
+
+cd "$repo_root"
+if [ -n "$url" ]; then
+ set -- shell am start -W \
+ -n com.tailscale.ipn/com.tailscale.ipn.DatapathTestActivity \
+ --es scenario "$scenario" \
+ --es requestId "$request_id" \
+ --es url "$url" \
+ --el timeoutMs "$timeout_ms"
+ run_adb "$@"
+ exit 0
+fi
+
+set -- shell am broadcast \
+ -n com.tailscale.ipn/com.tailscale.ipn.IPNReceiver \
+ -a com.tailscale.ipn.RUN_NETWORK_TEST \
+ --es scenario "$scenario" \
+ --es requestId "$request_id" \
+ --es host "$host" \
+ --ei port "$port" \
+ --es protocol "$protocol" \
+ --ez socksEnabled "$socks_enabled" \
+ --el timeoutMs "$timeout_ms"
+
+if [ -n "${host_header:-}" ]; then
+ set -- "$@" --es hostHeader "$host_header"
+fi
+
+if [ "${preview_only:-false}" = "true" ]; then
+ set -- "$@" --ez previewOnly true
+fi
+
+if [ -n "$path" ]; then
+ set -- "$@" --es path "$path"
+fi
+
+if [ -n "$payload" ]; then
+ set -- "$@" --es payload "$payload"
+fi
+
+run_adb "$@"
diff --git a/scripts/tsocks_test_server.py b/scripts/tsocks_test_server.py
new file mode 100755
index 0000000000..960758fbf2
--- /dev/null
+++ b/scripts/tsocks_test_server.py
@@ -0,0 +1,167 @@
+#!/usr/bin/env python3
+#
+# Copyright (c) Tailscale Inc & AUTHORS
+# SPDX-License-Identifier: BSD-3-Clause
+#
+
+import argparse
+import http.server
+import socket
+import socketserver
+import struct
+import threading
+import time
+import urllib.parse
+
+
+class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
+ allow_reuse_address = True
+ daemon_threads = True
+
+
+class ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
+ allow_reuse_address = True
+ daemon_threads = True
+
+
+class TsocksHTTPHandler(http.server.BaseHTTPRequestHandler):
+ server_version = "TSocksTestHTTP/1.0"
+
+ def do_GET(self):
+ parsed = urllib.parse.urlparse(self.path)
+ query = urllib.parse.parse_qs(parsed.query)
+ if parsed.path == "/healthz":
+ body = b"ok\n"
+ self.send_response(200)
+ self.send_header("Content-Type", "text/plain")
+ self.send_header("Content-Length", str(len(body)))
+ self.end_headers()
+ self.wfile.write(body)
+ return
+ if parsed.path == "/close":
+ body = b"server_close\n"
+ self.send_response(200)
+ self.send_header("Content-Type", "text/plain")
+ self.send_header("Connection", "close")
+ self.send_header("Content-Length", str(len(body)))
+ self.end_headers()
+ self.wfile.write(body)
+ self.wfile.flush()
+ self.close_connection = True
+ return
+ if parsed.path == "/stream":
+ chunks = int(query.get("chunks", ["32"])[0])
+ chunk_size = int(query.get("chunk_size", ["256"])[0])
+ delay_ms = int(query.get("delay_ms", ["25"])[0])
+ self.send_response(200)
+ self.send_header("Content-Type", "text/plain")
+ self.send_header("Connection", "close")
+ self.end_headers()
+ payload = (b"x" * chunk_size) + b"\n"
+ for _ in range(chunks):
+ self.wfile.write(payload)
+ self.wfile.flush()
+ time.sleep(delay_ms / 1000.0)
+ self.close_connection = True
+ return
+ body = f"path={parsed.path}\n".encode()
+ self.send_response(200)
+ self.send_header("Content-Type", "text/plain")
+ self.send_header("Content-Length", str(len(body)))
+ self.end_headers()
+ self.wfile.write(body)
+
+ def log_message(self, format, *args):
+ return
+
+
+class TsocksTCPHandler(socketserver.BaseRequestHandler):
+ def handle(self):
+ conn = self.request
+ data = b""
+ conn.settimeout(10)
+ try:
+ while b"\n" not in data and len(data) < 4096:
+ chunk = conn.recv(1024)
+ if not chunk:
+ break
+ data += chunk
+ except socket.timeout:
+ return
+ command = data.decode(errors="ignore").strip().upper()
+ if not command:
+ return
+ if command == "PING":
+ conn.sendall(b"PONG\n")
+ return
+ if command == "CLOSE":
+ conn.sendall(b"BYE\n")
+ try:
+ conn.shutdown(socket.SHUT_WR)
+ except OSError:
+ pass
+ return
+ if command == "RST":
+ linger = struct.pack("ii", 1, 0)
+ conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)
+ return
+ if command.startswith("STREAM"):
+ parts = command.split()
+ count = int(parts[1]) if len(parts) > 1 else 64
+ delay_ms = int(parts[2]) if len(parts) > 2 else 25
+ for idx in range(count):
+ conn.sendall(f"chunk-{idx}\n".encode())
+ time.sleep(delay_ms / 1000.0)
+ return
+ conn.sendall(b"UNKNOWN\n")
+
+
+def start_http(host: str, port: int):
+ server = ThreadedHTTPServer((host, port), TsocksHTTPHandler)
+ thread = threading.Thread(target=server.serve_forever, daemon=True)
+ thread.start()
+ return server
+
+
+def start_tcp(host: str, port: int):
+ server = ThreadedTCPServer((host, port), TsocksTCPHandler)
+ thread = threading.Thread(target=server.serve_forever, daemon=True)
+ thread.start()
+ return server
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--lan-host", required=True)
+ parser.add_argument("--tailnet-host", required=True)
+ parser.add_argument("--lan-http-port", type=int, default=18080)
+ parser.add_argument("--lan-tcp-port", type=int, default=19080)
+ parser.add_argument("--tailnet-http-port", type=int, default=18081)
+ parser.add_argument("--tailnet-tcp-port", type=int, default=19081)
+ args = parser.parse_args()
+
+ servers = [
+ start_http(args.lan_host, args.lan_http_port),
+ start_tcp(args.lan_host, args.lan_tcp_port),
+ start_http(args.tailnet_host, args.tailnet_http_port),
+ start_tcp(args.tailnet_host, args.tailnet_tcp_port),
+ ]
+ print(
+ f"TSOCKS_TEST_SERVICES lan={args.lan_host}:{args.lan_http_port}/{args.lan_tcp_port} "
+ f"tailnet={args.tailnet_host}:{args.tailnet_http_port}/{args.tailnet_tcp_port}",
+ flush=True,
+ )
+ try:
+ while True:
+ time.sleep(3600)
+ except KeyboardInterrupt:
+ pass
+ finally:
+ for server in servers:
+ server.shutdown()
+ server.server_close()
+
+
+if __name__ == "__main__":
+ main()