the default value on the machines seems to be 262144, but during some larger
experiments dmesg will sometimes show the following logs:
[Fri Aug 8 05:01:42 2025] TCP: too many orphaned sockets
(the same message repeated 10 times within the same second)
hopefully increasing this limit will fix that.
https://serverfault.com/questions/624911/what-does-tcp-too-many-orphaned-sockets-mean
the second answer on Server Fault also says it could be due to TCP memory
limits:
```
The possible cause of this error is the system running out of socket memory. Either you need to increase the socket memory (net.ipv4.tcp_mem) or find out the cause of the memory consumption.
[root@test ~]# cat /proc/sys/net/ipv4/tcp_mem
362688 483584 725376
So here in my system you can see 725376 (pages) * 4096 = 2971140096 bytes / (1024*1024) = 708 megabytes.
So these 708 megabytes of memory are used by applications for sending and receiving data, as well as by my loopback interface. If at any stage this value is reached, no further sockets can be made until this memory is released by the applications holding sockets open, which you can determine using netstat -antulp.
```
but for now I will just increase the max orphans and see if that is
enough.
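the kernel knob behind this message is presumably net.ipv4.tcp_max_orphans
(the name is inferred from the dmesg output, not quoted from the actual
change), so the fix would be a sysctl fragment along these lines, with the
new value being an arbitrary example rather than the one actually chosen:
```
# hypothetical /etc/sysctl.d/ fragment; 262144 was the observed default
net.ipv4.tcp_max_orphans = 1048576
```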
dmesg was showing these messages:
[Thu Aug 7 14:05:26 2025] net_ratelimit: 4328 callbacks suppressed
[Thu Aug 7 14:05:26 2025] neighbour: arp_cache: neighbor table overflow!
[Thu Aug 7 14:05:26 2025] neighbour: arp_cache: neighbor table overflow!
[Thu Aug 7 14:05:26 2025] neighbour: arp_cache: neighbor table overflow!
[Thu Aug 7 14:05:26 2025] neighbour: arp_cache: neighbor table overflow!
and the machines were becoming inaccessible. increasing the ARP cache size
fixes this.
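the neighbour table size is controlled by the gc_thresh sysctls, where
gc_thresh3 is the hard limit whose overflow produces the message above; a
sketch of the kind of change involved, with the values themselves being
assumptions rather than the ones actually used:
```
# hypothetical /etc/sysctl.d/ fragment; kernel defaults are 128/512/1024
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
```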
we were hitting conntrack limits when opening lots of connections and
sending UDP packets to many different hosts. this resulted in TCP packets
getting dropped, which would manifest itself as errors or timeouts when
connecting, and when sending UDP packets using `sendto` it would fail with
a permission denied error. disabling conntrack fixes all of these problems.
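the log does not show how conntrack was disabled; one common approach is to
mark all traffic NOTRACK in the raw table so it never enters the conntrack
machinery, sketched here as an assumption rather than the actual change:
```
# hypothetical approach: skip connection tracking for all traffic
iptables -t raw -A PREROUTING -j NOTRACK
iptables -t raw -A OUTPUT -j NOTRACK
```
raising net.netfilter.nf_conntrack_max instead would be the gentler
alternative if tracking were still needed for something.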
there were messages similar to:
HTB: quantum of class 10020 is small. Consider r2q change.
that showed up when bringing up the network. this commit fixes that.
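for context, HTB derives each class's quantum as rate / r2q and warns when
the result falls below roughly 1000 bytes; the warning goes away either by
lowering r2q on the root qdisc or by pinning quantum per class. a sketch
under assumed device names, handles, and rates (the actual fix is not shown
here):
```
# hypothetical: a smaller r2q yields a larger quantum for slow classes
tc qdisc add dev eth0 root handle 1: htb default 10 r2q 1
# or set the quantum explicitly on the offending class
tc class add dev eth0 parent 1: classid 1:10 htb rate 100kbit quantum 1500
```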
the env var OAR_P2P_CONCURRENCY_LIMIT limits the number of parallel
"operations" being done on the cluster machines. so, if it is set to 3,
then we only work on 3 machines at a time. setting it to 0 means unlimited.
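a sketch of the same throttling semantics using xargs, which happens to
share the convention that a parallelism of 0 means unlimited; the machine
names and the per-machine command are placeholders, not the project's
actual ones:
```shell
# run one "operation" per machine, at most $OAR_P2P_CONCURRENCY_LIMIT
# at a time (0 = unlimited, matching xargs -P 0)
LIMIT="${OAR_P2P_CONCURRENCY_LIMIT:-0}"
printf '%s\n' machine1 machine2 machine3 |
    xargs -P "$LIMIT" -I {} echo "setting up {}"
```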
currently the shell script used to list addresses in the 10.0.0.0/8 range
on a machine would fail with exit code 1 if no addresses were present in
that range (i.e. grep did not match anything). this fix just makes sure
that the command always returns exit code 0.
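the usual shape of such a fix, assuming the listing pipeline ends in a grep
(the actual script is not shown in the log): grep exits 1 when nothing
matches, which aborts scripts running under set -e, so the status is forced
back to 0:
```shell
# hypothetical pipeline; `|| true` keeps the exit code at 0 even when
# no address in the 10.0.0.0/8 range is configured on the machine
list_p2p_addresses() {
    ip -o -4 addr show | awk '{print $4}' | grep '^10\.' || true
}
```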
- Add generate-schedule.sh script to create container schedules from addresses.txt
- Add benchmark-startup Python script for analyzing container startup times
- Update demo.sh to print timestamps and wait for start signal at /oar-p2p/start
- Add comprehensive statistics including startup, start signal, and waiting times
- Support for synchronized container coordination via start signal file
Remove Rust-related files (Cargo.toml, Cargo.lock, src/, target/) and restructure as Python project using uv for dependency management. Update project structure to match nova-oar-mcp style with pyproject.toml, .python-version, and proper Python packaging conventions.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Complete Python script for OAR P2P network setup
- LatencyMatrix class for loading and validating square matrices
- Interface preparation and configuration with parallel execution
- TC latency emulation using netem (WIP - fixing class issues)
- Batch IP and TC operations for efficiency
- Docker containerized execution for consistent tooling
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
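a minimal sketch of what the LatencyMatrix validation described above might
look like; the class name comes from the commit message, but the file
format (whitespace-separated latencies, one row per line) and the method
names are assumptions:
```python
class LatencyMatrix:
    """Square matrix of pairwise latencies between machines (sketch)."""

    def __init__(self, rows):
        n = len(rows)
        # validate squareness: one row and one column per machine
        for i, row in enumerate(rows):
            if len(row) != n:
                raise ValueError(f"row {i} has {len(row)} entries, expected {n}")
        self.rows = rows

    @classmethod
    def parse(cls, text):
        # assumed format: whitespace-separated numbers, one row per line
        lines = [line for line in text.splitlines() if line.strip()]
        return cls([[float(x) for x in line.split()] for line in lines])

    def latency(self, src, dst):
        return self.rows[src][dst]
```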
- Add clap for CLI argument parsing with job_id, addresses, and latency_matrix
- Add serde/serde_json for JSON parsing of OAR job data
- Implement oar_network_addresses() to get machine list from OAR job
- Add address_from_index() to map indices to 10.0.0.0/8 IP addresses
- Add machine list with bond0 interfaces for charmander cluster
- Configure musl target build in Justfile for cluster deployment
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
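a sketch of what address_from_index() might do; the commit does not show
the implementation, and the exact encoding (host index packed into the low
24 bits of 10.0.0.0/8, starting at 10.0.0.1) is an assumption:
```rust
// hypothetical mapping from a machine/container index to a 10.0.0.0/8
// address; skips 10.0.0.0 and rejects indices past the /8 host space
fn address_from_index(index: u32) -> Option<String> {
    let n = index + 1; // start at 10.0.0.1
    if n >= 1 << 24 {
        return None; // out of range for 10.0.0.0/8
    }
    Some(format!("10.{}.{}.{}", (n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff))
}
```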