Posts with tag 06

RS-485 vs TTL: A Protocol Comparison

2025-06-19
0625julyfunnotes

[ai] RS-485 vs TTL: A Protocol Comparison

Basic concepts

RS-485:
- A serial communication standard based on differential signaling
- Balanced transmission over a pair of signal lines (A/B)
- Supports multi-point communication (up to 32 nodes)
- Transmission distance up to 1200 m (at 100 kbps)
- Common-mode voltage range: -7 V to +12 V

TTL:
- Transistor-transistor logic level standard
- Single-ended transmission (one signal line plus ground)
- Typically used for board-level communication
- Short transmission distance (usually under 1 m)
- Typical voltage range: 0 V (low) to +5 V (high)

Key differences

| Feature | RS-485 | TTL |
|-------------------|---------------------------------|--------------------------------|
| Signal type | Differential (A-B) | Single-ended (vs. ground) |
| Noise immunity | Strong | Weak |
| Distance | Up to 1200 m | Usually < 1 m |
| Duplex mode | Half-duplex / full-duplex | Usually full-duplex |
| Node count | Up to 32 (without repeaters) | Usually point-to-point |
| Voltage range | -7 V to +12 V | 0 V to +5 V |
| Speed | Low to high (up to 10 Mbps) | Usually high |
| Power consumption | Higher | Lower |

Typical applications

RS-485:
- Industrial automation control systems
- Building automation
- Remote data acquisition
- Any scenario requiring long-distance transmission

TTL:
- Communication between a microcontroller and peripheral chips
- Board-level communication between devices
- Short-distance, high-speed links
- Internal communication inside embedded systems

Conversion and compatibility

In practice, transceiver chips such as the MAX485 convert between TTL and RS-485 levels, letting a microcontroller join an RS-485 network.
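The single-ended vs. differential distinction above can be sketched in a few lines of Python. This is an illustrative model, not driver code: roughly 0.8 V / 2.0 V are the standard 5 V TTL input thresholds, and ±200 mV is the differential receiver threshold used by transceivers such as the MAX485 (whose datasheet convention is that A-B ≥ +200 mV reads high). The function names are made up for the sketch.

```python
def decode_ttl(v: float):
    """Single-ended TTL input: compare one voltage against ground.

    5 V TTL input thresholds: <= 0.8 V reads low, >= 2.0 V reads high;
    anything in between is undefined.
    """
    if v <= 0.8:
        return 0
    if v >= 2.0:
        return 1
    return None  # undefined region


def decode_rs485(v_a: float, v_b: float):
    """Differential RS-485 input: only the A-B difference matters.

    Receiver threshold is +/-200 mV (MAX485 convention: A-B >= +0.2 V
    reads high). Noise that hits A and B equally cancels out.
    """
    diff = v_a - v_b
    if diff >= 0.2:
        return 1
    if diff <= -0.2:
        return 0
    return None  # undetermined region


# Why differential signaling resists noise: add 3 V of common-mode
# noise to both lines and the decoded bit is unchanged...
noise = 3.0
assert decode_rs485(2.5 + noise, 0.5 + noise) == decode_rs485(2.5, 0.5) == 1

# ...while the same noise pushes a single-ended TTL low into the high region.
assert decode_ttl(0.3) == 0
assert decode_ttl(0.3 + noise) == 1
```

This cancellation of common-mode noise is exactly why RS-485 tolerates 1200 m of cable while TTL is confined to a board.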

Analyzing Language Naming Styles

2025-06-18
0625julyfunnotes

Prerequisite: case and underscore styles merely exploit the information redundancy of English words to help a reader quickly tell what kind of symbol something is. A language without letter case cannot use these naming styles at all.

- PnpSolverConfig
- pnpSolverConfig — for a single word, indistinguishable from lower snake_case
- pnp_solver_config
- PNP_SOLVER_CONFIG — for a single letter, indistinguishable from upper CamelCase

Kinds of symbols:

- Function: an encoding of a circuit. Stateless. Has an address.
- Function pointer: also a variable — a variable whose call method is overloaded.
  - A symbol that merely overloads call may also support other operations.
- Variable: has an address.
  - Constant: stateless.
  - Mutable variable: stateful.
- Module / namespace: a compile-time symbol.
- Type: a compile-time symbol / a rule for memory layout.

Scopes:

- Global
- Current function
- Member
- Note: functions in fact only ever live in global scope.

Rustfmt:

- Easy to tell which scope a variable belongs to: global, member, or current function.
- Hard to distinguish global functions from local functions.

zig fmt:

- Easy to tell whether a symbol overloads call.
- Easy to tell which scope a variable belongs to: global, member, or current function.
- Hard to distinguish global variables from local variables.
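The ambiguities above can be made concrete with a small classifier. The `classify` function and its regexes are a hypothetical sketch, not from the post; it returns every style a symbol could plausibly belong to, so ambiguity shows up as a multi-element set.

```python
import re


def classify(name: str) -> set:
    """Return every naming style a symbol could plausibly belong to."""
    styles = set()
    if re.fullmatch(r"[a-z][a-z0-9]*(_[a-z0-9]+)*", name):
        styles.add("snake_case")
    if re.fullmatch(r"[A-Z][A-Z0-9]*(_[A-Z0-9]+)*", name):
        styles.add("SCREAMING_SNAKE")
    if re.fullmatch(r"[A-Z][a-zA-Z0-9]*", name):
        styles.add("PascalCase")
    if re.fullmatch(r"[a-z][a-zA-Z0-9]*", name):
        styles.add("camelCase")
    return styles


# Multi-word symbols carry enough redundancy to be unambiguous...
assert classify("PnpSolverConfig") == {"PascalCase"}
assert classify("pnp_solver_config") == {"snake_case"}

# ...but the redundancy vanishes for short symbols:
# a single lowercase word could be camelCase or snake_case,
assert classify("config") == {"camelCase", "snake_case"}
# and a single capital letter could be PascalCase or SCREAMING_SNAKE.
assert classify("N") == {"PascalCase", "SCREAMING_SNAKE"}
```

This is the post's point in executable form: the styles disambiguate only because typical identifiers are multi-word; degenerate one-word or one-letter names collapse several styles into one spelling.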

Odd PyTorch errors, logged for grep

2025-06-17
0625julyfunnotes

25.6.17

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
File /home/julyfun/Documents/GitHub/julyfun/how-to/notes/julyfun/技术学习/diffusion-models-class/unit2-02_class_conditioned_diffusion_model_example.py:3
      1 # %%
      2 @run
----> 3 def func():
      4     net = ClassConditionedUnet(num_classes=10).to(device)
      5     img = torch.randn(3, 1, 28, 28).to(device)

File ~/Documents/GitHub/julyfun/robotoy/robotoy/ziglike/test.py:58, in run(func)
     57 def run(func):
---> 58     func()

File /home/julyfun/Documents/GitHub/julyfun/how-to/notes/julyfun/技术学习/diffusion-models-class/unit2-02_class_conditioned_diffusion_model_example.py:4
      2 @run
      3 def func():
----> 4     net = ClassConditionedUnet(num_classes=10).to(device)
      5     img = torch.randn(3, 1, 28, 28).to(device)
      6     cls = torch.tensor([0, 1, 9]).to(device)

File ~/Documents/GitHub/diffusion-models-class/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1355, in Module.to(self, *args, **kwargs)
   1352     else:
   1353         raise
-> 1355 return self._apply(convert)

File ~/Documents/GitHub/diffusion-models-class/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py:915, in Module._apply(self, fn, recurse)
    913 if recurse:
    914     for module in self.children():
--> 915         module._apply(fn)
    917 def compute_should_use_set_data(tensor, tensor_applied):
    918     if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    919         # If the new tensor has compatible tensor type as the existing tensor,
    920         # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    925         # global flag to let the user control whether they want the future
    926         # behavior of overwriting the existing tensor or not.

File ~/Documents/GitHub/diffusion-models-class/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py:942, in Module._apply(self, fn, recurse)
    938 # Tensors stored in modules are graph leaves, and we don't want to
    939 # track autograd history of `param_applied`, so we have to use
    940 # `with torch.no_grad():`
    941 with torch.no_grad():
--> 942     param_applied = fn(param)
    943 p_should_use_set_data = compute_should_use_set_data(param, param_applied)
    945 # subclasses may have multiple child tensors so we need to use swap_tensors

File ~/Documents/GitHub/diffusion-models-class/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1341, in Module.to.<locals>.convert(t)
   1334     if convert_to_format is not None and t.dim() in (4, 5):
   1335         return t.to(
   1336             device,
   1337             dtype if t.is_floating_point() or t.is_complex() else None,
   1338             non_blocking,
   1339             memory_format=convert_to_format,
   1340         )
-> 1341     return t.to(
   1342         device,
   1343         dtype if t.is_floating_point() or t.is_complex() else None,
   1344         non_blocking,
   1345     )
   1346 except NotImplementedError as e:
   1347     if str(e) == "Cannot copy out of meta tensor; no data!":

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```

Context: see above.

[ok, but why] After restarting the kernel, the same code ran without any problem.
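A device-side assert is reported asynchronously, so the traceback above likely points at the wrong line. A common debugging pattern is sketched below: `CUDA_LAUNCH_BLOCKING` is a real CUDA/PyTorch environment variable, but `check_class_labels` is a hypothetical helper for one frequent cause of this error, a class index outside `[0, num_classes)` reaching an embedding or loss kernel on the GPU.

```python
import os

# Must be set before CUDA is initialized (i.e. before `import torch`
# in a fresh process): kernel launches become synchronous, so the
# Python traceback points at the op that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"


def check_class_labels(labels, num_classes):
    """Guard against one frequent trigger of device-side asserts:
    class indices outside [0, num_classes) fed to an embedding or
    a classification loss."""
    bad = [x for x in labels if not 0 <= x < num_classes]
    if bad:
        raise ValueError(
            f"labels out of range for num_classes={num_classes}: {bad}"
        )
    return True


# The post's `cls = torch.tensor([0, 1, 9])` with num_classes=10 is fine.
assert check_class_labels([0, 1, 9], num_classes=10)
```

As for why restarting the kernel helped: once a device-side assert fires, the CUDA context in that process is left in a failed state, so every subsequent CUDA call errors out regardless of whether the new code is correct; a kernel restart creates a fresh process and a fresh context, which is likely why the rerun succeeded.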
